• Undergraduate
  • High School
  • Architecture
  • American History
  • Asian History
  • Antique Literature
  • American Literature
  • Asian Literature
  • Classic English Literature
  • World Literature
  • Creative Writing
  • Linguistics
  • Criminal Justice
  • Legal Issues
  • Anthropology
  • Archaeology
  • Political Science
  • World Affairs
  • African-American Studies
  • East European Studies
  • Latin-American Studies
  • Native-American Studies
  • West European Studies
  • Family and Consumer Science
  • Social Issues
  • Women and Gender Studies
  • Social Work
  • Natural Sciences
  • Pharmacology
  • Earth science
  • Agriculture
  • Agricultural Studies
  • Computer Science
  • IT Management
  • Mathematics
  • Investments
  • Engineering and Technology
  • Engineering
  • Aeronautics
  • Medicine and Health
  • Alternative Medicine
  • Communications and Media
  • Advertising
  • Communication Strategies
  • Public Relations
  • Educational Theories
  • Teacher's Career
  • Chicago/Turabian
  • Company Analysis
  • Education Theories
  • Shakespeare
  • Canadian Studies
  • Food Safety
  • Relation of Global Warming and Extreme Weather Condition
  • Movie Review
  • Admission Essay
  • Annotated Bibliography
  • Application Essay
  • Article Critique
  • Article Review
  • Article Writing
  • Book Review
  • Business Plan
  • Business Proposal
  • Capstone Project
  • Cover Letter
  • Creative Essay
  • Dissertation
  • Dissertation - Abstract
  • Dissertation - Conclusion
  • Dissertation - Discussion
  • Dissertation - Hypothesis
  • Dissertation - Introduction
  • Dissertation - Literature
  • Dissertation - Methodology
  • Dissertation - Results
  • GCSE Coursework
  • Grant Proposal
  • Marketing Plan
  • Multiple Choice Quiz
  • Personal Statement
  • Power Point Presentation
  • Power Point Presentation With Speaker Notes
  • Questionnaire
  • Reaction Paper
  • Research Paper
  • Research Proposal
  • SWOT analysis
  • Thesis Paper
  • Online Quiz
  • Literature Review
  • Movie Analysis
  • Statistics problem
  • Math Problem
  • All papers examples
  • How It Works
  • Money Back Policy
  • Terms of Use
  • Privacy Policy
  • We Are Hiring

Social Media Is a Threat to Privacy, Essay Example

Pages: 3

Words: 860

Hire a Writer for Custom Essay

Use 10% Off Discount: "custom10" in 1 Click 👇

You are free to use it as an inspiration or a source for your own work.

Introduction

Social networking has been a global phenomenon with the proliferation of various social networking platforms such as Facebook, Twitter, and Instagram. In many ways, social media has replaced conventional modes of communication such as the telephone and email. People can keep in touch with others by sharing experiences and photographs, and most cases exchange personal information. Social media users usually post private information as part of the process of knowing one another. Since social media is associated with having a large number of users unknown to the client, there is an increased risk of exposing personal details to cybercriminals.

Social media is a threat to privacy

Social media has increased privacy concerns with online platforms. Although they are effective in connecting with family and friend, social media can also endanger private information. Individuals create social media profiles that may expose their private information. According to research conducted by Carnegie Mellon University, information found in social media is sufficient to guess one’s social security number, which can lead to identity theft. With the advent of mobile banking applications, more people are login their sensitive data to smartphones, which can endanger their privacy.

Another group whose privacy is in danger is teenagers. Teenagers post a significant amount of information online, which makes it vitally for them to understand the people they share information with or use privacy settings. However, most teenagers are interested with capturing the attention of their peers and in the process posting information that may enhance their status. This information may not seem harmful, but it can be exploited by cyber criminals to get access to the parents.

Several articles have argued about the threat of privacy in online platforms such as social media. In “online privacy: current health 2 by given (2009),” the author argues that online predators can figure out information posted in online platforms, which can be used against the user. Additionally, employers can use the online platform to check out their employees. With the proliferation of electronic health registers, cyber criminals can access an individual’s health information. It presents a significant danger to the individuals concerned because such registers contain vital information such as social security number and insurance details.

In her article titled” should you panic about online privacy?” Palmer (2010) notes that online platforms are a threat to privacy and individual must take measures to protect their personal data. Due to the threat posed by online environments to privacy, Palmer suggests various strategies that user can use to safeguard their privacy. One way of doing this is by removing the birth year from a personal profile in social media networks because the full birth year is often used by banks to categorize their clients. Cyber criminals can use such information to access online banking systems that can compromise user’s safety. Another suggestion given by Palmer is to use antivirus and anti-spyware. These techniques will prevent criminals from exploiting them to access confidential information.

In the piece carried by the New Yorker tiled “the face of Facebook” by José Vargas may present information about Mark Zuckerberg that is public domain, it illustrates how Facebook profiles reveal private and confidential information to virtually anyone on the site. FACEBOOK is a directory of global citizens that affords people the chance to create public identities. Friends can access this information; friends of friends can also access some while some information is also available to anyone interested in them. Although the company has changed it privacy policies severally, it still exposes private information in several ways. From Zuckerberg’s profile, it is possible to know that he has three sisters, where he schooled in, he favorite comedian and musicians, and his interests. His friends can also access his cell-phone number and email address. Additionally, the addition of a feature known as Places, which allows users to mark their location means that someone interested in Zuckerberg’s location can know it anytime. The article by Vargas reveals how easy it is to access one’s private information in Facebook.

Plagiarism, which is using another person’s ideas or creations without giving credit to that person, is another concern in social media. Individuals take information from other sources or individuals and use them as their own, which a common practice in social media. The lack of attribution and fabrication of content are the real issues because users seldom give credit to the source of the content. Despite the fact that social media is for connecting with friends and family, it has been used as social aggregators, which makes it important to give links to the sources of the content.

Social media platforms raise concerns over privacy issues because others can exploit information that is innocently posted on these sites. Cyber criminals can exploit the information to harm the user. It is important to note that different people can access information posted online. Users must take significant steps to protect their information by using anti-spyware software and emitting sensitive information in their profiles.

Given M. (2008). online privacy. Current health 2 . Retrieved from academic search complete.

Palmer L. (2010, 08) should you panic about online privacy? Redbook , Vol. 215 Issue 2, p130

Vargas,J.A. ( 2010, sep20).  The face of Facebook. The New Yorker. Retrieved from http://www.newyorker.com/magazine/2010/09/20/the-face-of-facebook

Stuck with your Essay?

Get in touch with one of our experts for instant help!

Vitamin C Experiment, Essay Example

The Professionalization of Ujwai Bharati, Essay Example

Time is precious

don’t waste it!

Plagiarism-free guarantee

Privacy guarantee

Secure checkout

Money back guarantee

E-book

Related Essay Samples & Examples

Voting as a civic responsibility, essay example.

Pages: 1

Words: 287

Utilitarianism and Its Applications, Essay Example

Words: 356

The Age-Related Changes of the Older Person, Essay Example

Pages: 2

Words: 448

The Problems ESOL Teachers Face, Essay Example

Pages: 8

Words: 2293

Should English Be the Primary Language? Essay Example

Pages: 4

Words: 999

The Term “Social Construction of Reality”, Essay Example

Words: 371

Social Media and Privacy: The Dangers and Privacy Issues

Introduction, an overview of social media, privacy and security issues in social media.

Over the past century, the invention of the computer and the subsequent creation of the internet have been among the major accomplishments in terms of communication advancements. These two entities have been used extensively to revolutionize the world in regard to information processing and communication strategies.

One of the areas that have recorded significant growth as a result of computing and internet advancements is social media. Social networking has attracted millions of users globally and the numbers continue to grow as more people gain access to computers, mobile phones and the internet (Frau-Meigs, 2011). Social networking sites have created avenues through which people can communicate with friends, colleagues and family around the world (Flynn, 2012).

Despite these benefits, social media poses serious security and privacy concerns to users. This paper shall set out to explore the privacy and security risks that users are exposed to. To achieve this aim, social media shall be defined and examples of social media provided.

This shall then be followed by a problem section, which shall focus on the privacy and security issues such as identity theft, spying, and fraud among other cyber crimes that are inherent to social media. The pros and cons of social media shall be discussed and solutions to curb the security and privacy issues proposed.

Social media refers to online and mobile communication technologies that are used by people to engage in interactive dialogue (Bradshaw & Keefer, 2007). Some of the web-based technologies include social networks (Facebook and Myspace), blogging sites (wordpress and typepad), wikis (Wikipedia, Wikia etc.) and business/technical networks (Linkedln among others).

Despite the fact that social media has helped people stay in touch regardless of various constraints such as distance, time and financial aspects, there are privacy and security issues that threaten the effectiveness of social media channels. Privacy in this regard refers to personal information that is sensitive, important and inaccessible by other members of the public (Timm & Duven, 2008, p.90)

Social media can therefore be viewed as an effective tool for enhancing and promoting globalization (Bradshaw & Keefer, 2007). This is attributed to the fact that social media technologies facilitate communication and interactions on a global scale, thereby promoting international relations.

According to Cha (2011), security and privacy problems emanating from social media are classified into behavioral and technical issues. One of the problems associated with social media is the invasion of privacy. This has been attributed to technological advancements that have made invasion of privacy not only feasible, but also achievable.

For example, social networking sites such as Facebook and twitter among others have created an avenue through which personal information can easily be accessed by other people, thereby making invasion of privacy easy (Miller & Wells, 2007). To expound on this, Castro (2010, p.2) states that there have been cases whereby internet companies collect personal information data from these sites and use it in ways that clearly violates the privacy of users.

The author reveals that personal information is at times taken and redistributed to third parties without the consent of knowledge of the user (Castro, 2010, p.2). Distribution of personal information under such circumstances constitutes to a breach of one’s right to privacy.

Similarly, there is no way of guaranteeing anonymity while using social media. In the past, people were encouraged to use pseudonyms in social networking sites in order to protect their real identities (Chen et al, 2008). However, most sites today adopt an arm-twisting approach that forces people to use their real identity and information as they register into these sites (Chen et al, 2008).

Cha (2011) asserts that through technical specifications and registration requirements, people are encouraged to reveal their true identity and personal information before they become members of the online community. In addition, social networking Sites such as Facebook share the IP address of users without their consent every time a person uses its services (Cha, 2011).

One of the main advantages of social media is that they create a platform through which people can share ideas, information and opinions freely (Hong et al, 2005). This posses a serious security and privacy threat on both a national and personal level. This is attributed to the fact that most of information is placed in public domains where it can be viewed by many people. As such, people may react negatively to such information, thereby causing mass panic.

For example, there are comments and messages that promote radicalism. Terrorism acts may emanate from such information in cases where people are mislead into believing that there are injustices going on and they need to act. In addition, photos and comment made on social media sites may be a source of embarrassment in the future and may threaten the careers of many people if used inappropriately (Spon, 2010; Albrechtslund, 2008).

Due to advances in technology, spying has been made easier through social media. People create profiles by giving details about their lives and preferences. As such, stalkers can gather such information and use it in ways that invade the privacy of other people (Albrechtslund, 2008). As such, social media promotes social vices such as spying and stalking. This has been made even worse by the revelation that these sites make the world a more dangerous place for children.

Social media channels have created more risks to children who may be preyed upon by criminals in online environments. According to Barnes (2006), chat rooms and mobile phones have been used to lure young children into meeting strangers who later subject them to various forms of abuse. The author reveals that a significant number of children reported that they were molested by people they met via social media outlets. This shows the security risk children and adults are exposed to through social media.

Albrechtslund (2008) asserts that various governments have expressed their interest in social media since they help governments to profile potential criminals. Through these channels, governments can collect various information on a person such as location of residency, work place, religious and political affiliation and friendship circles. Such data enables different government faculties to closely monitor people and improve their surveillance abilities.

However, this posses a serious security and privacy threat to citizens if such information falls into the hands of oppressive regimes. People may suffer as government agencies monitors and intervenes in all aspects of their lives by tapping into their emails, phone calls and browsing histories (Bennett, 2011).

Social media provides an avenue through which people can communicate and interact with relative ease. However, there are privacy risks that users are exposed to as a result of using various social media such as social networking sites (Goettke & Christiana, 2007). Throughout this report, it has been revealed that most users are unaware of the privacy and security risks they come across in the online environment.

However, the fact still remains that information posted on social media is at risk of being viewed by intended and unintended audiences across the world. As such, Flint (2009) argues that information posted on such media cannot be considered as private since the main intention is to share that information. This is made worse by the fact that people often post persona information without being forced to do so. As such, users of social media are indeed the architects of their own exposure.

To address the privacy and security of personal information posted in social media, developers of applications such as facebook have implemented policies that clearly stipulate how personal information posted on their website is used (Facebook, 2012). This policy also reveals other parties that may have access to personal information and the measures adopted to ensure that this information is protected and individuals’ right to privacy is upheld (Wallbridge, 2009).

In addition, there are data protection legislations, which seek to protect the privacy of online users against malicious business entities and criminals. Most of these laws stipulate that there must be an acceptable purpose before a third party processes the personal information of an individual (Brogan, 2010). However, for such laws and policies to be effective, users have to read and familiarize themselves with them in order to take the appropriate actions.

According to Fischer-Hubner (2008), the rapid rate at which social media are gaining prominence, makes it safe to assume that they will be the main form of communication in the future. This is mainly due to the fact that they are cheap, time-saving and convenient. With this in mind, it would be prudent to implement stricter laws and policies, which safeguard the privacy of users.

This paper set out to explore the privacy and security issues that affect social media. The discussions presented herein reveal that users of social media willingly post personal information, which can be used by malicious criminals and businesses to compromise the privacy and security of individuals in the real world. It has also been observed that people post personal information because they have a false sense of security while using social media.

Such information may be used by governments, criminals and even employers to blackmail, profile and spy on individuals. Despite the fact that there are laws and policies that seek to protect users’ information from such vices, individuals should exercise caution and filter information that they publish on social media, because it becomes public as soon as it is posted. In so doing, users are better placed to avoid the negative implications that result from uploading personal information in social media.

Albrechtslund, A 2008, ‘Online social networking as participatory surveillance’, FirstMonday , vol. 13, no. 3, pp. 16-34.

Barnes, BS 2006, ‘A privacy paradox: social networking in the United States’, FirstMonday , vol. 11, no. 9, pp. 34-39.

Bennett, C 2011, ‘Privacy Advocacy from the Inside and the Outside: Implications for the Politics of Personal Data Protection in Networked Societies’, Journal of Comparative Policy Analysis: Research and Practice, vol. 13, no. 2, pp. 125-141.

Bradshaw, D & Keefer, N 2007, ‘Students and Digital Privacy: From Social Control to Learned Protection and Online Safety’, Theory & Research in Social Education, vol. 35, no. 2, pp. 322-332.

Brogan, C 2010, Social Media 101: Tactics and Tips to Develop Your Business Online, John Wiley and Sons, Boston.

Castro, D 2010, The right to privacy is not a right to Facebook, Information Technology and Innovation Foundation (ITIF), USA.

Cha, J 2011, ‘Information privacy: a comprehensive analysis of information request and privacy policies of most-visited Web sites’, Asian Journal of Communication, vol. 21, no. 6, pp. 613-631.

Chen, H et al 2008, ‘Online privacy control via anonymity and pseudonym: Cross-cultural implications’, Behavior & Information Technology, vol. 27, no. 3, pp. 229-242.

Facebook 2012, About Facebook . Web.

Fischer-Hubner, S et al 2008, The future of identity in the information society: proceedings of the Third IFIP WG 9.2, 9.6/11.6, 11.7/FIDIS International Summer School on the Future of Identity in the Information Society, Karlstad University, Sweden, 2007, Springer, New York.

Flint, D 2009, ‘Law shaping technology: Technology shaping the law, International Review of Law’, Computers & Technology , vol. 23, no. 1, pp. 5–11.

Flynn, N 2012, The Social Media Handbook: Rules, Policies, and Best Practices to Successfully Manage Your Organization’s Social Media Presence, Posts, and Potential, John Wiley & Sons, Boston.

Frau-Meigs, D 2011, Media Matters in the Cultural Contradictions of the “information Society”: Towards a Human Rights-based Governance , Council of Europe, London.

Goettke, R & Christiana, J 2007, Privacy and Online Social Networking Websites. Web.

Hong, T et al 2005, ‘Internet privacy practices of news media and implications for online journalism’, Journalism Studies, vol. 6, no. 1, pp. 15-28.

Miller, C & Wells, S 2007, ‘Balancing Security and Privacy in the Digital Workplace’, Journal of Change Management, vol. 7, no. 4, pp. 315-328.

Spon, M 2010, Is your e. impression costing you the job? Society of Industrial and Organizational Psychology Media. Web.

Timm, M & Duven, C 2008, Privacy and Social Networking Sites, Wiley InterScience.

Wallbridge, R 2009, How safe is Your Facebook Profile? Privacy issues of online social networks, ANU Undergraduate Research Journal , 1(2): 85-92.

  • Chicago (A-D)
  • Chicago (N-B)

IvyPanda. (2019, May 20). Social Media and Privacy: The Dangers and Privacy Issues. https://ivypanda.com/essays/privacy-and-security-in-social-media-report/

"Social Media and Privacy: The Dangers and Privacy Issues." IvyPanda , 20 May 2019, ivypanda.com/essays/privacy-and-security-in-social-media-report/.

IvyPanda . (2019) 'Social Media and Privacy: The Dangers and Privacy Issues'. 20 May.

IvyPanda . 2019. "Social Media and Privacy: The Dangers and Privacy Issues." May 20, 2019. https://ivypanda.com/essays/privacy-and-security-in-social-media-report/.

1. IvyPanda . "Social Media and Privacy: The Dangers and Privacy Issues." May 20, 2019. https://ivypanda.com/essays/privacy-and-security-in-social-media-report/.

Bibliography

IvyPanda . "Social Media and Privacy: The Dangers and Privacy Issues." May 20, 2019. https://ivypanda.com/essays/privacy-and-security-in-social-media-report/.

  • Business Ethics and Healthcare Responses
  • Facebook Should Be Banned
  • Facebook and Privacy
  • The History of Internet and Internet Security
  • Social Media Ethics Essay: Examples & Definition
  • Cryptography and Privacy Protection
  • Conceptual Exegesis of Web Production: A Cocktail Website
  • The relationship between form and content
  • IEEE Xplore Digital Library
  • IEEE Standards
  • IEEE Spectrum

IEEE Digital Privacy

  • Publications
  • Additional Digital Privacy Resources

Privacy Risks and Social Media

What is the impact of social media on privacy.

The rise of social media has had a profound impact on privacy. Platforms like Facebook, Instagram, and Twitter encourage users to share personal information and details about their lives. This has led to increased visibility and transparency online, obscuring the lines between public and private. Many users are not fully aware of the privacy risks involved in oversharing on social media. User data is collected, analyzed, and monetized by social media companies. There are growing privacy concerns over how this data could be misused or fall into the wrong hands. Increased connectivity on social media also exposes users to various cybersecurity threats. The privacy debate becomes even more complex with new technologies like facial recognition and location tracking embedded in social media apps.

Social media has dramatically changed notions of privacy. While it enables self-expressio n and virtual connections, it also carries risks like profiling, targeted advertising, and mass surveillance. More education is needed to empower users to make informed choices about their privacy. The responsibility is also on social media platforms to ensure transparency and give users more control over their data. Finding the right balance between privacy and openness is crucial as social media use continues to pervade modern life.

Introduction to Privacy Risks on Social Media

Protecting Privacy in the Digital Age

Social media privacy and privacy policies have become major concerns in recent years. The core privacy risks on platforms like Facebook and Twitter include data collection, targeted advertising, tracking user behavior, security breaches and more.

When signing up for social media, users are typically required to provide personal information like name, email, birthdate, interests, location and more. This data is then stored, analyzed and used by the platforms for various purposes. The companies build extensive profiles about their users based on online activity and behaviors. All interactions and posts on social media are analyzed to understand user preferences and habits.

A lot of this personal data is used by social media companies for targeted advertising. By understanding what users like, where they live, and their demographics, extremely customized ads can be delivered. Users often consent to this in lengthy terms of service agreements without realizing how their data will be leveraged. Beyond advertising, user data may also be shared with third parties ranging from marketers to government agencies without social media user knowledge or consent.

These practices clearly pose major privacy risks. Aggregation of personal data always carries the danger of confidential information being hacked or leaked. Social media security breaches can expose user data to cybercriminals who may use it for identity theft, scams, or other illegal activities. There are also concerns around mass surveillance, tracking of sensitive information, and discrimination through user profiling.

Over the years, privacy advocates have protested against the opaque data collection and monitoring practices of social media giants. Public awareness and skepticism about social media privacy has grown substantially. Users are increasingly concerned about their personal information being compromised or misused without their knowledge. Many now demand greater transparency around data handling as well as tools to control their privacy settings. However, social media platforms still have a long way to go in mitigating privacy risks and prioritizing user rights.

Learn more in our course program: Protecting Privacy in the Digital Age

Access the courses

Data Collection Practices on Social Media

Social media networking sites employ various methods to gather user data and build expansive profiles. Understanding how information is collected on platforms like Facebook and Instagram is key to assessing privacy impacts.

At the basic level, social media companies directly ask for personal user information during the sign-up process. This includes full name, email address, phone number, location, date of birth, and more. Verifying email and phone number is often mandatory to open an account.

Beyond self-reported data, a lot of information is gathered indirectly through user activity online. Web tracking tools like cookies, pixels, and APIs monitor behaviors such as posts liked, content shared, pages visited, and search habits. This reveals user interests, beliefs, identity markers, and daily routines. Social media apps leverage the smartphone's sensors, calendar, contacts list and metadata to understand usage patterns as well.

Intricate social graphs are constructed through connections and interactions among users. The activities of a user's network provide additional data points for profiling. Comments, tags, and messages can give insights about relationships, preferences, and offline interactions. Face recognition algorithms applied to photos and videos also power data collection.

Some information is volunteered intentionally by users for specific services. For instance, uploading contacts to find friends, location-sharing, taking personality quizzes, signing into third party apps with social media login, and enabling access to other connected devices. Users are often unaware of how these can expand data gathering.

While data practices vary across platforms, the amount of user information collected is vast and ever-expanding. There are currently no comprehensive laws governing social media data collection. Self-regulation and consent mechanisms have proven inadequate from a privacy standpoint. Critics argue that excessive and non-transparent data mining on social media violates user privacy and facilitates overreach. On the other hand, companies claim data enables personalized experiences and secures platforms.

Users need to be more aware of the manifold ways social networking sites amass personal data. Privacy settings provide some control over sharing preferences, but underlying surveillance remains pervasive. Tighter regulation, improved consent flows, and “privacy by design” approaches have been suggested to realign power dynamics between users and platforms. Careful risk-benefit evaluations regarding data collection are vital for social media services going forward.

Cybersecurity Threats and Social Media

Social media usage comes with various cyber threat and ransomware attack risks that can compromise user privacy and security. Being vigilant and using common sense safeguards is essential.

One of the most common social media threats is phishing attack. Fake social accounts or messages mimic trusted sources to trick users into sharing login credentials or financial information that is then misused for fraud. Phishing content can spread rapidly on social media through seemingly benign posts or ads.

Malware distribution is another prevalent attack, where clicking dubious links or downloading infected files leads to spyware installation on devices. The access this provides to hackers can allow data and identity theft.

Networked nature of social media creates avenues for scams that manipulate users by compromising their friends’ social media accounts. Profile impersonation to defraud acquaintances is also common.

Oversharing personal information carelessly aids profiling that can fuel identity theft and targeted cyber-attacks based on gleaned intelligence. Social engineering thrives on social media, with users more likely to lower guard due to false sense of intimacy with connections.

Lax privacy settings and application permissions granted blindly also expose user content to unintended audiences and cybercriminals. Social platforms are prime hunting grounds for predators seeking victims by mining data.

Most social networks are walled gardens limiting interoperability. This lock-in makes it operationally challenging for users to transition away should a breach occur or terms of service change arbitrarily.

While cyber threats originating externally are serious, social media institutions themselves have also faced security failures exposing vast user data. Their centralized control over huge silos of personal data repeatedly raises stability and accountability concerns.

To exercise caution, users should adopt unique complex passwords, enable two-factor authentication, be wary of requests for sensitive data, think before posting details like travel plans publicly, limit app permissions, and report suspicious activity.

However, the burden cannot fall entirely on individuals. Social media firms need to implement platform-wide defenses like AI detection of threats, strict access controls, and proactive policing considering their scale. Integrating cybersecurity as core design priority is vital to avoid endangering their billions of users.

Social Engineering and Privacy

Social engineering poses a distinct social media privacy and social engineering threat . This technique manipulates natural human tendencies to lower defenses and divulge information that can then be exploited maliciously. Awareness is key to combat such deception.

Social engineering preys on qualities like curiosity, obligation to help others, desire for reward, and fear of violating social conventions. Social media environments amplify these vulnerabilities as users get accustomed to loosely interacting with expanded networks online.

Tactics used in social engineering include phishing, impersonation, grooming, catfishing, spreading misinformation, multi-stage operations, and more. It aims to solicit data, funds, access, or compliance from unwitting users.

For example, a common tactic is creating a fake profile impersonating a friend or authority figure to convince others to share login details or transfer money. Curiosity can be exploited using clickbait posts that entice clicking on malware links. Spreading rumors and falsehoods socially engineer’s belief and action.

Multi-stage strategies combine tactics like first befriending a target through a fake persona, then creating an emergency situation to manipulate urgent aid. Personal details shared online aid these tactics in collecting credible background info and identifying targets.

On social media, direct connections and peer sharing create a veneer of trust. But limited cues in online interactions make impersonation and deception easier compared to real world. Users must therefore exercise more conscious judgment of credibility.

Education on common tactics like authority figure impersonation, urgency creation, or exploiting curiosity/greed reduces user vulnerability. Fact-checking sources and links before sharing combats misinformation tactics. Enabling post/message privacy settings limits data mining.

Critical thinking and verification tools also help assess credibility of requests and unusual activity involving finances, data sharing or recruitment. Seeking confirmation from suspicious contacts via other channels is prudent.

While individuals should take responsibility, social media platforms also need to detect and shutdown fake accounts rapidly. They can enhance identity verification and leverage AI to identify and flag suspicious patterns of information gathering.

Social engineering exploits human inclinations for deceit and profit. With social media embedding deeply across many aspects of life, individuals and institutions both require enhanced literacy to combat sophisticated manipulation threats. Caution and critical thinking are vital.

Legal and Ethical Implications of Social Media Privacy

Social media privacy has complex data privacy and user data legal and ethical dimensions beyond just cybersecurity concerns. Governments, companies and users globally are grappling with regulatory frameworks.

Demand has grown for laws preventing unauthorized collection and usage of personal data that Web 2.0 economics thrive on. Europe's GDPR limits data processing and mandates disclosures, access rights, and consent requirements that impact social media platforms.

The United States lacks omnibus federal laws but has industry-specific regulations for sectors like healthcare and finance. Multiple lawsuits have alleged illegal data collection practices by Facebook and others under federal and state laws. New state-level privacy laws are emerging.

Moves toward greater regulation invoke debate around balancing privacy, innovation, security, and free speech. Overreach stifles progress but inaction enables unchecked overexploitation of user data driven by surveillance capitalism.

With users across age groups and demographics flocking to social media, questions of ethics and integrity have also risen around privacy, especially regarding minors. Social media magnifies risks like bullying, abuse, manipulation and micro-targeting of vulnerable groups. Avoiding such harm merits consideration beyond legal compliance.

As globally networked services, social media platforms have to navigate varying privacy expectations and local laws across nations. For instance, Europe emphasizes individual privacy rights more than North America. A cohesive international framework remains challenging but is important to address given cross-border data flows.

Automated decision-making based on algorithms analyzing user data can reflect and amplify embedded societal biases. Transparency and rights around profiling to prevent discriminatory exploitation are ethically recommended. Allowing user access to their own data hosted by social networks is essential.

Social media companies need to demonstrate greater commitment to moral obligations that respect user privacy and welfare. Though voluntary, ethics shape long-term trust more than laws.

Meanwhile, users should realize that platforms thrive on maximizing data collection and engagement metrics. Participating eyes open by becoming discerning custodians of personal data is crucial. Seeking informed consent and ethical enforcement mechanisms can help strengthen social media privacy foundations.

Social media and privacy represent an evolving interplay between technology, business imperatives, regulation, ethics, and user expectations. As platforms amass more personal data and derive value through micro-targeted advertising, risks of misuse, breaches and overreach grow. Cyber threats and social engineering compound concerns. Thus, debates around legal safeguards and ethical practices are deepening to protect user privacy. While solutions vary across nations, consensus emerges on increasing transparency and user control over data. Training users on privacy issues also matters given social media's network effects and hold over modern lifestyles. Overall, strengthening privacy on social media requires coordinated efforts between lawmakers, companies, civil society and citizens. With growing calls to rethink data commoditization, platforms may need to adopt alternative revenue models that align incentives to respect user privacy. As technologies like AI expand social media capabilities and reach, ongoing deliberation on privacy-centric frameworks for accountability and governance becomes crucial.

Interested in joining IEEE Digital Privacy? IEEE Digital Privacy is an IEEE-wide effort dedicated to champion the digital privacy needs of the individuals. This initiative strives to bring the voice of technologists to the digital privacy discussion and solutions, incorporating a holistic approach to address privacy that also includes economic, legal, and social perspectives. Join the IEEE Digital Privacy Community to stay involved with the initiative program activities and connect with others in the field.

  • Share full article

Advertisement

Supported by

The Battle for Digital Privacy Is Reshaping the Internet

As Apple and Google enact privacy changes, businesses are grappling with the fallout, Madison Avenue is fighting back and Facebook has cried foul.

essay on does social media violate our privacy

By Brian X. Chen

Listen to This Article

SAN FRANCISCO — Apple introduced a pop-up window for iPhones in April that asks people for their permission to be tracked by different apps.

Google recently outlined plans to disable a tracking technology in its Chrome web browser.

And Facebook said last month that hundreds of its engineers were working on a new method of showing ads without relying on people’s personal data.

The developments may seem like technical tinkering, but they were connected to something bigger: an intensifying battle over the future of the internet. The struggle has entangled tech titans, upended Madison Avenue and disrupted small businesses. And it heralds a profound shift in how people’s personal information may be used online, with sweeping implications for the ways that businesses make money digitally.

At the center of the tussle is what has been the internet’s lifeblood: advertising .

More than 20 years ago, the internet drove an upheaval in the advertising industry. It eviscerated newspapers and magazines that had relied on selling classified and print ads, and threatened to dethrone television advertising as the prime way for marketers to reach large audiences.

Instead, brands splashed their ads across websites, with their promotions often tailored to people’s specific interests. Those digital ads powered the growth of Facebook, Google and Twitter, which offered their search and social networking services to people without charge. But in exchange, people were tracked from site to site by technologies such as “ cookies, ” and their personal data was used to target them with relevant marketing.

Now that system, which ballooned into a $350 billion digital ad industry, is being dismantled. Driven by online privacy fears, Apple and Google have started revamping the rules around online data collection. Apple, citing the mantra of privacy, has rolled out tools that block marketers from tracking people. Google, which depends on digital ads, is trying to have it both ways by reinventing the system so it can continue aiming ads at people without exploiting access to their personal data.

essay on does social media violate our privacy

If personal information is no longer the currency that people give for online content and services, something else must take its place. Media publishers, app makers and e-commerce shops are now exploring different paths to surviving a privacy-conscious internet, in some cases overturning their business models. Many are choosing to make people pay for what they get online by levying subscription fees and other charges instead of using their personal data.

Jeff Green, the chief executive of the Trade Desk, an ad-technology company in Ventura, Calif., that works with major ad agencies, said the behind-the-scenes fight was fundamental to the nature of the web.

“The internet is answering a question that it’s been wrestling with for decades, which is: How is the internet going to pay for itself?” he said.

The fallout may hurt brands that relied on targeted ads to get people to buy their goods. It may also initially hurt tech giants like Facebook — but not for long. Instead, businesses that can no longer track people but still need to advertise are likely to spend more with the largest tech platforms, which still have the most data on consumers.

David Cohen, chief executive of the Interactive Advertising Bureau, a trade group, said the changes would continue to “drive money and attention to Google, Facebook, Twitter.”

The shifts are complicated by Google’s and Apple’s opposing views on how much ad tracking should be dialed back. Apple wants its customers, who pay a premium for its iPhones, to have the right to block tracking entirely. But Google executives have suggested that Apple has turned privacy into a privilege for those who can afford its products.

For many people, that means the internet may start looking different depending on the products they use. On Apple gadgets, ads may be only somewhat relevant to a person’s interests, compared with highly targeted promotions inside Google’s web. Website creators may eventually choose sides, so some sites that work well in Google’s browser might not even load in Apple’s browser, said Brendan Eich, a founder of Brave, the private web browser.

“It will be a tale of two internets,” he said.

Businesses that do not keep up with the changes risk getting run over. Increasingly, media publishers and even apps that show the weather are charging subscription fees, in the same way that Netflix levies a monthly fee for video streaming. Some e-commerce sites are considering raising product prices to keep their revenues up.

Consider Seven Sisters Scones, a mail-order pastry shop in Johns Creek, Ga., which relies on Facebook ads to promote its items. Nate Martin, who leads the bakery’s digital marketing, said that after Apple blocked some ad tracking, its digital marketing campaigns on Facebook became less effective. Because Facebook could no longer get as much data on which customers like baked goods, it was harder for the store to find interested buyers online.

“Everything came to a screeching halt,” Mr. Martin said. In June, the bakery’s revenue dropped to $16,000 from $40,000 in May.

Sales have since remained flat, he said. To offset the declines, Seven Sisters Scones has discussed increasing prices on sampler boxes to $36 from $29.

Apple declined to comment, but its executives have said advertisers will adapt. Google said it was working on an approach that would protect people’s data but also let advertisers continue targeting users with ads.

Since the 1990s, much of the web has been rooted in digital advertising. In that decade, a piece of code planted in web browsers — the “cookie” — began tracking people’s browsing activities from site to site. Marketers used the information to aim ads at individuals, so someone interested in makeup or bicycles saw ads about those topics and products.

After the iPhone and Android app stores were introduced in 2008, advertisers also collected data about what people did inside apps by planting invisible trackers. That information was linked with cookie data and shared with data brokers for even more specific ad targeting.

The result was a vast advertising ecosystem that underpinned free websites and online services. Sites and apps like BuzzFeed and TikTok flourished using this model. Even e-commerce sites rely partly on advertising to expand their businesses.

But distrust of these practices began building. In 2018, Facebook became embroiled in the Cambridge Analytica scandal, where people’s Facebook data was improperly harvested without their consent. That same year, European regulators enacted the General Data Protection Regulation , laws to safeguard people’s information. In 2019, Google and Facebook agreed to pay record fines to the Federal Trade Commission to settle allegations of privacy violations.

In Silicon Valley, Apple reconsidered its advertising approach. In 2017, Craig Federighi, Apple’s head of software engineering, announced that the Safari web browser would block cookies from following people from site to site.

“It kind of feels like you’re being tracked, and that’s because you are,” Mr. Federighi said. “No longer.”

Last year, Apple announced the pop-up window in iPhone apps that asks people if they want to be followed for marketing purposes. If the user says no, the app must stop monitoring and sharing data with third parties.

That prompted an outcry from Facebook , which was one of the apps affected. In December, the social network took out full-page newspaper ads declaring that it was “standing up to Apple” on behalf of small businesses that would get hurt once their ads could no longer find specific audiences.

“The situation is going to be challenging for them to navigate,” Mark Zuckerberg, Facebook’s chief executive, said.

Facebook is now developing ways to target people with ads using insights gathered on their devices, without allowing personal data to be shared with third parties. If people who click on ads for deodorant also buy sneakers, Facebook can share that pattern with advertisers so they can show sneaker ads to that group. That would be less intrusive than sharing personal information like email addresses with advertisers.

“We support giving people more control over how their data is used, but Apple’s far-reaching changes occurred without input from the industry and those who are most impacted,” a Facebook spokesman said.

Since Apple released the pop-up window, more than 80 percent of iPhone users have opted out of tracking worldwide, according to ad tech firms. Last month, Peter Farago, an executive at Flurry, a mobile analytics firm owned by Verizon Media, published a post on LinkedIn calling the “time of death” for ad tracking on iPhones.

At Google, Sundar Pichai, the chief executive, and his lieutenants began discussing in 2019 how to provide more privacy without killing the company’s $135 billion online ad business. In studies, Google researchers found that the cookie eroded people’s trust. Google said its Chrome and ad teams concluded that the Chrome web browser should stop supporting cookies.

But Google also said it would not disable cookies until it had a different way for marketers to keep serving people targeted ads. In March, the company tried a method that uses its data troves to place people into groups based on their interests, so marketers can aim ads at those cohorts rather than at individuals. The approach is known as Federated Learning of Cohorts, or FLOC.

Plans remain in flux. Google will not block trackers in Chrome until 2023 .

Even so, advertisers said they were alarmed.

In an article this year, Sheri Bachstein, the head of IBM Watson Advertising, warned that the privacy shifts meant that relying solely on advertising for revenue was at risk. Businesses must adapt, she said, including by charging subscription fees and using artificial intelligence to help serve ads.

“The big tech companies have put a clock on us,” she said in an interview.

Kate Conger contributed reporting.

Brian X. Chen is the lead consumer technology writer for The Times. He reviews products and writes Tech Fix , a column about the social implications of the tech we use. Before joining The Times in 2011, he reported on Apple and the wireless industry for Wired. More about Brian X. Chen

  • Commercial AI Use
  • Gov't AI Use
  • AI in the Criminal Justice System
  • Screening & Scoring
  • FTC Commercial Surveillance Rulemaking

Social Media Privacy

  • Data Brokers
  • Communications Privacy
  • Competition and Privacy
  • Web Scraping
  • Article III Standing
  • Data Security
  • Election Security
  • Presidential Directives
  • International Privacy
  • Enforcement of Privacy Laws
  • Government Records & Privacy
  • Location Tracking
  • Children's Privacy
  • Student Privacy
  • Health Privacy
  • Workplace Privacy
  • Privacy & Racial Justice
  • Voter Privacy
  • Census Privacy
  • Donor Privacy
  • Online Harassment
  • Access to Information
  • Freedom of Information Act
  • Federal Advisory Committee Act
  • Privacy Impact Assessments
  • Fourth Amendment
  • International Privacy Laws
  • U.S. Privacy Laws
  • Proposed U.S. Legislation
  • State Privacy Laws
  • FISA Section 702
  • Face Surveillance & Biometrics
  • Drones & Aerial Surveillance
  • Traveler Screening & Border Surveillance
  • Privacy in Public
  • Intelligence Surveillance
  • Government Databases
  • PATRIOT Act
  • Wiretapping
  • EPIC Publications
  • Digital Library
  • EPIC in the News
  • EPIC Statements
  • EPIC Commentaries
  • EPIC Projects
  • Board & Staff
  • EPIC Advisory Board
  • Careers & Internships
  • Being a Non-profit
  • Cy Pres Awards
  • Privacy Policy

Consumer Privacy

Too many social media platforms are built on excessive collection, algorithmic processing, and commercial exploitation of users’ personal data. That must change.

Amicus Briefs

APA Comments

Consumer Cases

Over the past two decades, social media platforms have become vast and powerful tools for connecting, communicating, sharing content, conducting business, and disseminating news and information. Today, millions or billions of users populate major social networks including Facebook, Instagram, TikTok, Snapchat, YouTube, Twitter, LinkedIn, and dating apps like Grindr and Tinder.

But the extraordinary growth of social media has given platforms extraordinary access and influence into the lives of users. Social networking companies harvest sensitive data about individuals’ activities, interests, personal characteristics, political views, purchasing habits, and online behaviors. In many cases this data is used to algorithmically drive user engagement and to sell behavioral advertising—often with distortive and discriminatory impacts. 

The privacy hazards of social networks are compounded by platform consolidation, which has enabled some social media companies to acquire competitors, exercise monopolistic power, and severely limit the rise of privacy-protective alternatives. Personal data held by social media platforms is also vulnerable to being accessed and misused by third parties, including law enforcement agencies.

As EPIC has long urged, Congress must enact comprehensive data protection legislation to place strict limits on the collection, processing, use, and retention of personal data by social networks and other entities. The Federal Trade Commission should also make use of its existing authority to rein in abusive data practices by social media companies, and both the FTC and Congress must take swift action to prevent monopolistic behavior and promote competition in the social media market.

Social Media & Surveillance Advertising

Social media companies—and in particular, Facebook—collect vast quantities of personal data in order to “microtarget” advertisements to users. This practice, also known as surveillance advertising or behavioral advertising, is deeply harmful to privacy, the flow of information, and the psychological health of social media users. 

As former FTC Commissioner Rohit Chopra  wrote  in his dissent from the FTC’s 2019 Facebook  order , “Behavioral advertising generates profits by turning users into products, their activity into assets, their communities into targets, and social media platforms into weapons of mass manipulation.” Chopra went on to explain how surveillance advertising operates in Facebook’s case:

To maximize the probability of inducing profitable user engagement, Facebook has a strong incentive to (a) increase the total time a user engages with the platform and (b) curate an environment that goads users into monetizable actions.  To accomplish both of these objectives, Facebook and other companies with a similar business model have developed an unquenchable thirst for more and more data. This data goes far beyond information that users believe they are providing, such as their alma mater, their friends, and entertainers they like. Facebook can develop a detailed, intimate portrait of each user that is constantly being updated in real time, including our viewing behavior, our reactions to certain types of content, and our activities across the digital sphere where Facebook’s technology is embedded. The company can make more profit if it can manipulate us into constant engagement and specific actions aligned with its monetization goals.  As long as advertisers are willing to pay a high price for users to consume specific content, companies like Facebook have an incentive to curate content in ways that affect our psychological state and real-time preferences.

Notably, tracking and behavioral advertising by social media companies is not limited to the platforms themselves. Firms like Facebook use hard-to-detect tracking techniques to follow individuals across a variety of apps, websites, and devices. As a result, even those who intentionally opt out of social media platforms are affected by their data collection and advertising practices.

Social Media & Competition

Data collection is at the core of many social media platforms’ business models. For this reason, mergers and acquisitions involving social networks pose acute risks to consumer privacy. Yet in recent years, platforms that have promised to protect user privacy have been repeatedly taken over by companies that fail to protect user privacy.

One of the most notable examples of this trend is Facebook’s 2014 purchase of WhatsApp, a messaging service that attracted users precisely  because  of strong commitments to privacy. WhatsApp’s founder stated in 2012 that, “[w]e have not, we do not and we will not ever sell your personal information to anyone.” Although EPIC and the Center for Digital Democracy  urged  the FTC to block the proposed Facebook-WhatsApp deal, the FTC ultimately  approved  the merger after both companies promised not to make any changes to WhatsApp user privacy settings. 

However, Facebook  announced  in 2016 that it would begin acquiring the personal information of WhatsApp users, directly contradicting their previous promises to honor user privacy. Antitrust authorities in the EU  fined  Facebook $122 million in 2017 for making deliberately false representations about the company’s ability to integrate the personal data of WhatsApp users. Yet the FTC took no further action at the time. It wasn’t until the FTC’s 2020  antitrust lawsuit  against Facebook—six years after the merger—that the FTC publicly identified Facebook’s acquisition of WhatsApp as part of a pattern of anticompetitive behavior.

For many years, the United States stood virtually alone in its unwillingness to address privacy as an important dimension of competition in the digital marketplace. With the 2020 wave of  federal and state antitrust lawsuits  against Facebook and Google—and with a renewed interest in antitrust enforcement at the FTC—that dynamic may finally be changing. But moving forward, it is vital that antitrust enforcers take data protection and privacy into account in their antitrust enforcement actions and assessments of market competition. If the largest social media platforms continue to buy up new market entrants and assimilate their users’ data into the existing platforms, there will be no meaningful opportunity for other firms to compete with better privacy and data security practices. 

Social Media & Data Breaches

The massive stores of personal data that social media platforms collect and retain are vulnerable to hacking, scraping, and data breaches, particularly if platforms fail to institute critical security measures and access restrictions. Depending on the network, the data at risk can include location information, health information, religious identity, sexual orientation, facial recognition imagery, private messages, personal photos, and more. The consequences of exposing this information can be severe: from  stalking  to the forcible  outing  of LGBTQ individuals to the  disclosure  of one’s religious practices and movements. 

Without federal comprehensive privacy legislation, users often have little protection against data breaches. Although social media companies typically publish privacy policies, these policies are wholly inadequate to protect users’ sensitive information. Privacy policies are disclaimers published by platforms and websites that purport to operate as waivers once users “consent” to them. But these policies are often vague, hard to interpret, full of loopholes, subject to unilateral changes by the platforms, and difficult or impossible for injured users to enforce. 

EPIC’s Work on Social Media Privacy

For more than a decade, EPIC has advocated before Congress, the courts, and the Federal Trade Commission to protect the privacy of social media users.

Beginning in 2008, EPIC warned of the exact problem that would later lead to the Facebook  Cambridge Analytica scandal . In  Senate testimony  in 2008, then-EPIC President Marc Rotenberg stated that, “on Facebook … third party applications do not only access the information about a given user that has added the application. Applications by default get access to much of the information about that user’s friends.” 

In 2009, EPIC and nine other public interest organizations filed a  complaint  with the FTC detailing how Facebook changed its privacy settings to begin disclosing information to third-party applications and the public which users had sought to keep private. Facebook implemented these changes without obtaining affirmative consent from its users or even giving them the ability to opt out. In 2011, the FTC  announced  that Facebook had settled charges that it deceived users by failing to keep its privacy promises and credited EPIC with providing the factual basis for its complaint against Facebook.

In 2014, EPIC filed a  complaint  with the FTC alleging that Facebook “altered the News Feeds of Facebook users to elicit positive and negative emotional responses.” Facebook had teamed up with researchers to conduct a  psychological experiment  by exposing one group of users to positive emotional content and another group of users to negative emotional content to determine whether users would alter their own posting behavior. The study found that “emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.” EPIC alleged that the researchers who conducted the study “failed to follow standard ethical protocols for human subject research.” EPIC further alleged that Facebook engaged in unfair and deceptive practices in violation of Section 5 of the FTC Act by not informing users that they were potentially subject to behavioral testing. Finally, EPIC alleged that Facebook’s psychological study violated the  2011 FTC Consent Order  by misrepresenting its data collection practices.

In 2014, when Facebook entered a deal to acquire the text-messaging application  WhatsApp , EPIC and the Center for Digital Democracy filed a  complaint  with the FTC urging the Commission to block Facebook’s acquisition of WhatsApp unless adequate privacy safeguards were established. Although the FTC approved the merger, the Commission sent a letter to Facebook and WhatsApp notifying the companies of their obligations to honor their privacy promises. In 2016, WhatsApp  announced  its plans to transfer users’ personal information to Facebook for use in targeted advertising. 

In March 2018, news broke that Facebook had allowed  Cambridge Analytica , a political data mining firm associated with the Trump campaign, to access personal information on 87 million Facebook users. EPIC and a coalition of consumer organizations immediately wrote a  letter  to the FTC urging it to investigate this unprecedented disclosure of personal data. The groups made clear that by exposing users’ personal data without their knowledge or consent, Facebook had violated the 2011 Consent Order with the FTC, which made it unlawful for Facebook to disclose user data without affirmative consent. The groups wrote that, “The FTC’s failure to enforce its order has resulted in the unlawful transfer of [87] million user records … [i]t is unconscionable that the FTC allowed this unprecedented disclosure of Americans’ personal data to occur. The FTC’s failure to act imperils not only privacy but democracy as well.”

EPIC also submitted an  urgent FOIA request  to the FTC following the Cambridge Analytica revelations. The  request  sought all the privacy assessments required by the FTC’s 2011 Order and all communications between the FTC and Facebook regarding those privacy assessments. Following the FTC’s  release  of heavily redacted versions of the assessments, EPIC filed a Freedom of Information Act  lawsuit  to obtain the full, unredacted reports from the FTC.

In 2019, following a proposed  settlement  between the FTC and Facebook in connection with the Cambridge Analytica breach, EPIC  moved to intervene  in  United States v. Facebook  to protect the interests of Facebook users. EPIC argued in the case that the settlement was “not adequate, reasonable, or appropriate.” 

In 2020, following President Trump’s threat to effectively ban social network TikTok from the United States, Oracle reached a  tentative agreement  to serve as TikTok’s U.S. partner and to “ independently process  TikTok’s U.S. data.” In response, EPIC sent demand letters to  Oracle  and  TikTok  warning both of their legal obligation to protect the privacy of TikTok users if the companies entered a partnership. The deal would have paired one of the largest brokers of personal data with a network of 800 million users, creating grave privacy and legal risks. “Absent strict privacy safeguards, which to our knowledge Oracle has not established, [the] collection, processing, use, and dissemination of TikTok user data would constitute an unlawful trade practice,” EPIC wrote. In 2021, the Oracle-TikTok deal was  effectively scuttled . 

Also in 2020, EPIC and coalition of child advocacy, consumer, and privacy groups filed a  complaint  urging the Federal Trade Commission to investigate and penalize TikTok for violating the  Children’s Online Privacy Protection Act . TikTok paid a  $5.7 million fine  for violating the children’s privacy law in 2019. Nevertheless, TikTok failed to delete personal information previously collected from children and was still collecting kids’ personal information without notice to and consent of parents. 

Recent Documents on Social Media Privacy

In re: safeguarding and securing the open internet, epic comments on colorado universal opt-out mechanism (uoom) shortlist, netchoice v. paxton / moody v. netchoice.

US Supreme Court

Whether the First Amendment prevents nearly all regulation of social media companies' content-hosting and content-arranging decisions.

Aleksandr Kogan and Alexander Nix; Analysis to Aid Public Comment

Examining facebook’s proposed digital currency and data privacy considerations, carr v. department of transportation.

Pennsylvania Supreme Court

Whether the First Amendment protects a public employee from being fired for a Facebook post

In re: Facebook, Inc. Internet Tracking Litigation

US Court of Appeals for the Ninth Circuit

Whether Facebook violated the privacy rights of users by tracking their web browsing history even after they logged out of the platform

EPIC v. FTC (Facebook Assessments)

US District Court for the District of Columbia

Seeking disclosure of Facebook assessments, reports, and related records required by the 2012 FTC Consent Order

In re Facebook and Facial Recognition (2018)

Charging that Facebook's facial recognition practice lacks privacy safeguards and violates the 2011 Consent Order with the FTC

Top Updates

Senate ai roadmap fails to recognize or address ai harms.

May 15, 2024

essay on does social media violate our privacy

EPIC-Led Coalition Applauds FCC Classifying ISPs as Common Carriers, Urges Immediate Privacy Rulemaking

December 15, 2023

EPIC Provides Feedback to Colo. AG on Possible Universal Opt-Out Mechanisms

December 12, 2023

Sarah Gilbert, Jessica Vitak, & Katie Shilton | 2021

Shoshana Zuboff | 2021

Kate Klonick | 2020

Roger McNamee | 2019

Shoshana Zuboff | 2019

Ari Ezra Waldman | 2016

essay on does social media violate our privacy

Support Our Work

EPIC's work is funded by the support of individuals like you, who help us to continue to protect privacy, open government, and democratic values in the information age.

  • Search Menu
  • Browse content in Arts and Humanities
  • Browse content in Archaeology
  • Anglo-Saxon and Medieval Archaeology
  • Archaeological Methodology and Techniques
  • Archaeology by Region
  • Archaeology of Religion
  • Archaeology of Trade and Exchange
  • Biblical Archaeology
  • Contemporary and Public Archaeology
  • Environmental Archaeology
  • Historical Archaeology
  • History and Theory of Archaeology
  • Industrial Archaeology
  • Landscape Archaeology
  • Mortuary Archaeology
  • Prehistoric Archaeology
  • Underwater Archaeology
  • Urban Archaeology
  • Zooarchaeology
  • Browse content in Architecture
  • Architectural Structure and Design
  • History of Architecture
  • Residential and Domestic Buildings
  • Theory of Architecture
  • Browse content in Art
  • Art Subjects and Themes
  • History of Art
  • Industrial and Commercial Art
  • Theory of Art
  • Biographical Studies
  • Byzantine Studies
  • Browse content in Classical Studies
  • Classical Literature
  • Classical Reception
  • Classical History
  • Classical Philosophy
  • Classical Mythology
  • Classical Art and Architecture
  • Classical Oratory and Rhetoric
  • Greek and Roman Archaeology
  • Greek and Roman Epigraphy
  • Greek and Roman Law
  • Greek and Roman Papyrology
  • Late Antiquity
  • Religion in the Ancient World
  • Digital Humanities
  • Browse content in History
  • Colonialism and Imperialism
  • Diplomatic History
  • Environmental History
  • Genealogy, Heraldry, Names, and Honours
  • Genocide and Ethnic Cleansing
  • Historical Geography
  • History by Period
  • History of Agriculture
  • History of Education
  • History of Emotions
  • History of Gender and Sexuality
  • Industrial History
  • Intellectual History
  • International History
  • Labour History
  • Legal and Constitutional History
  • Local and Family History
  • Maritime History
  • Military History
  • National Liberation and Post-Colonialism
  • Oral History
  • Political History
  • Public History
  • Regional and National History
  • Revolutions and Rebellions
  • Slavery and Abolition of Slavery
  • Social and Cultural History
  • Theory, Methods, and Historiography
  • Urban History
  • World History
  • Browse content in Language Teaching and Learning
  • Language Learning (Specific Skills)
  • Language Teaching Theory and Methods
  • Browse content in Linguistics
  • Applied Linguistics
  • Cognitive Linguistics
  • Computational Linguistics
  • Forensic Linguistics
  • Grammar, Syntax and Morphology
  • Historical and Diachronic Linguistics
  • History of English
  • Language Variation
  • Language Families
  • Language Acquisition
  • Language Evolution
  • Language Reference
  • Lexicography
  • Linguistic Theories
  • Linguistic Typology
  • Linguistic Anthropology
  • Phonetics and Phonology
  • Psycholinguistics
  • Sociolinguistics
  • Translation and Interpretation
  • Writing Systems
  • Browse content in Literature
  • Bibliography
  • Children's Literature Studies
  • Literary Studies (Modernism)
  • Literary Studies (Asian)
  • Literary Studies (European)
  • Literary Studies (Eco-criticism)
  • Literary Studies (Romanticism)
  • Literary Studies (American)
  • Literary Studies - World
  • Literary Studies (1500 to 1800)
  • Literary Studies (19th Century)
  • Literary Studies (20th Century onwards)
  • Literary Studies (African American Literature)
  • Literary Studies (British and Irish)
  • Literary Studies (Early and Medieval)
  • Literary Studies (Fiction, Novelists, and Prose Writers)
  • Literary Studies (Gender Studies)
  • Literary Studies (Graphic Novels)
  • Literary Studies (History of the Book)
  • Literary Studies (Plays and Playwrights)
  • Literary Studies (Poetry and Poets)
  • Literary Studies (Postcolonial Literature)
  • Literary Studies (Queer Studies)
  • Literary Studies (Science Fiction)
  • Literary Studies (Travel Literature)
  • Literary Studies (War Literature)
  • Literary Studies (Women's Writing)
  • Literary Theory and Cultural Studies
  • Mythology and Folklore
  • Shakespeare Studies and Criticism
  • Browse content in Media Studies
  • Browse content in Music
  • Applied Music
  • Dance and Music
  • Ethics in Music
  • Ethnomusicology
  • Gender and Sexuality in Music
  • Medicine and Music
  • Music Cultures
  • Music and Culture
  • Music and Religion
  • Music and Media
  • Music Education and Pedagogy
  • Music Theory and Analysis
  • Musical Scores, Lyrics, and Libretti
  • Musical Structures, Styles, and Techniques
  • Musicology and Music History
  • Performance Practice and Studies
  • Race and Ethnicity in Music
  • Sound Studies
  • Browse content in Performing Arts
  • Browse content in Philosophy
  • Aesthetics and Philosophy of Art
  • Epistemology
  • Feminist Philosophy
  • History of Western Philosophy
  • Metaphysics
  • Moral Philosophy
  • Non-Western Philosophy
  • Philosophy of Action
  • Philosophy of Law
  • Philosophy of Religion
  • Philosophy of Science
  • Philosophy of Language
  • Philosophy of Mind
  • Philosophy of Perception
  • Philosophy of Mathematics and Logic
  • Practical Ethics
  • Social and Political Philosophy
  • Browse content in Religion
  • Biblical Studies
  • Christianity
  • East Asian Religions
  • History of Religion
  • Judaism and Jewish Studies
  • Qumran Studies
  • Religion and Education
  • Religion and Health
  • Religion and Politics
  • Religion and Science
  • Religion and Law
  • Religion and Art, Literature, and Music
  • Religious Studies
  • Browse content in Society and Culture
  • Cookery, Food, and Drink
  • Cultural Studies
  • Customs and Traditions
  • Ethical Issues and Debates
  • Hobbies, Games, Arts and Crafts
  • Lifestyle, Home, and Garden
  • Natural world, Country Life, and Pets
  • Popular Beliefs and Controversial Knowledge
  • Sports and Outdoor Recreation
  • Technology and Society
  • Travel and Holiday
  • Visual Culture
  • Browse content in Law
  • Arbitration
  • Browse content in Company and Commercial Law
  • Commercial Law
  • Company Law
  • Browse content in Comparative Law
  • Systems of Law
  • Competition Law
  • Browse content in Constitutional and Administrative Law
  • Government Powers
  • Judicial Review
  • Local Government Law
  • Military and Defence Law
  • Parliamentary and Legislative Practice
  • Construction Law
  • Contract Law
  • Browse content in Criminal Law
  • Criminal Procedure
  • Criminal Evidence Law
  • Sentencing and Punishment
  • Employment and Labour Law
  • Environment and Energy Law
  • Browse content in Financial Law
  • Banking Law
  • Insolvency Law
  • History of Law
  • Human Rights and Immigration
  • Intellectual Property Law
  • Browse content in International Law
  • Private International Law and Conflict of Laws
  • Public International Law
  • IT and Communications Law
  • Jurisprudence and Philosophy of Law
  • Law and Society
  • Law and Politics
  • Browse content in Legal System and Practice
  • Courts and Procedure
  • Legal Skills and Practice
  • Primary Sources of Law
  • Regulation of Legal Profession
  • Medical and Healthcare Law
  • Browse content in Policing
  • Criminal Investigation and Detection
  • Police and Security Services
  • Police Procedure and Law
  • Police Regional Planning
  • Browse content in Property Law
  • Personal Property Law
  • Study and Revision
  • Terrorism and National Security Law
  • Browse content in Trusts Law
  • Wills and Probate or Succession
  • Browse content in Medicine and Health
  • Browse content in Allied Health Professions
  • Arts Therapies
  • Clinical Science
  • Dietetics and Nutrition
  • Occupational Therapy
  • Operating Department Practice
  • Physiotherapy
  • Radiography
  • Speech and Language Therapy
  • Browse content in Anaesthetics
  • General Anaesthesia
  • Neuroanaesthesia
  • Browse content in Clinical Medicine
  • Acute Medicine
  • Cardiovascular Medicine
  • Clinical Genetics
  • Clinical Pharmacology and Therapeutics
  • Dermatology
  • Endocrinology and Diabetes
  • Gastroenterology
  • Genito-urinary Medicine
  • Geriatric Medicine
  • Infectious Diseases
  • Medical Oncology
  • Medical Toxicology
  • Pain Medicine
  • Palliative Medicine
  • Rehabilitation Medicine
  • Respiratory Medicine and Pulmonology
  • Rheumatology
  • Sleep Medicine
  • Sports and Exercise Medicine
  • Clinical Neuroscience
  • Community Medical Services
  • Critical Care
  • Emergency Medicine
  • Forensic Medicine
  • Haematology
  • History of Medicine
  • Medical Ethics
  • Browse content in Medical Dentistry
  • Oral and Maxillofacial Surgery
  • Paediatric Dentistry
  • Restorative Dentistry and Orthodontics
  • Surgical Dentistry
  • Browse content in Medical Skills
  • Clinical Skills
  • Communication Skills
  • Nursing Skills
  • Surgical Skills
  • Medical Statistics and Methodology
  • Browse content in Neurology
  • Clinical Neurophysiology
  • Neuropathology
  • Nursing Studies
  • Browse content in Obstetrics and Gynaecology
  • Gynaecology
  • Occupational Medicine
  • Ophthalmology
  • Otolaryngology (ENT)
  • Browse content in Paediatrics
  • Neonatology
  • Browse content in Pathology
  • Chemical Pathology
  • Clinical Cytogenetics and Molecular Genetics
  • Histopathology
  • Medical Microbiology and Virology
  • Patient Education and Information
  • Browse content in Pharmacology
  • Psychopharmacology
  • Browse content in Popular Health
  • Caring for Others
  • Complementary and Alternative Medicine
  • Self-help and Personal Development
  • Browse content in Preclinical Medicine
  • Cell Biology
  • Molecular Biology and Genetics
  • Reproduction, Growth and Development
  • Primary Care
  • Professional Development in Medicine
  • Browse content in Psychiatry
  • Addiction Medicine
  • Child and Adolescent Psychiatry
  • Forensic Psychiatry
  • Learning Disabilities
  • Old Age Psychiatry
  • Psychotherapy
  • Browse content in Public Health and Epidemiology
  • Epidemiology
  • Public Health
  • Browse content in Radiology
  • Clinical Radiology
  • Interventional Radiology
  • Nuclear Medicine
  • Radiation Oncology
  • Reproductive Medicine
  • Browse content in Surgery
  • Cardiothoracic Surgery
  • Gastro-intestinal and Colorectal Surgery
  • General Surgery
  • Neurosurgery
  • Paediatric Surgery
  • Peri-operative Care
  • Plastic and Reconstructive Surgery
  • Surgical Oncology
  • Transplant Surgery
  • Trauma and Orthopaedic Surgery
  • Vascular Surgery
  • Browse content in Science and Mathematics
  • Browse content in Biological Sciences
  • Aquatic Biology
  • Biochemistry
  • Bioinformatics and Computational Biology
  • Developmental Biology
  • Ecology and Conservation
  • Evolutionary Biology
  • Genetics and Genomics
  • Microbiology
  • Molecular and Cell Biology
  • Natural History
  • Plant Sciences and Forestry
  • Research Methods in Life Sciences
  • Structural Biology
  • Systems Biology
  • Zoology and Animal Sciences
  • Browse content in Chemistry
  • Analytical Chemistry
  • Computational Chemistry
  • Crystallography
  • Environmental Chemistry
  • Industrial Chemistry
  • Inorganic Chemistry
  • Materials Chemistry
  • Medicinal Chemistry
  • Mineralogy and Gems
  • Organic Chemistry
  • Physical Chemistry
  • Polymer Chemistry
  • Study and Communication Skills in Chemistry
  • Theoretical Chemistry
  • Browse content in Computer Science
  • Artificial Intelligence
  • Computer Architecture and Logic Design
  • Game Studies
  • Human-Computer Interaction
  • Mathematical Theory of Computation
  • Programming Languages
  • Software Engineering
  • Systems Analysis and Design
  • Virtual Reality
  • Browse content in Computing
  • Business Applications
  • Computer Games
  • Computer Security
  • Computer Networking and Communications
  • Digital Lifestyle
  • Graphical and Digital Media Applications
  • Operating Systems
  • Browse content in Earth Sciences and Geography
  • Atmospheric Sciences
  • Environmental Geography
  • Geology and the Lithosphere
  • Maps and Map-making
  • Meteorology and Climatology
  • Oceanography and Hydrology
  • Palaeontology
  • Physical Geography and Topography
  • Regional Geography
  • Soil Science
  • Urban Geography
  • Browse content in Engineering and Technology
  • Agriculture and Farming
  • Biological Engineering
  • Civil Engineering, Surveying, and Building
  • Electronics and Communications Engineering
  • Energy Technology
  • Engineering (General)
  • Environmental Science, Engineering, and Technology
  • History of Engineering and Technology
  • Mechanical Engineering and Materials
  • Technology of Industrial Chemistry
  • Transport Technology and Trades
  • Browse content in Environmental Science
  • Applied Ecology (Environmental Science)
  • Conservation of the Environment (Environmental Science)
  • Environmental Sustainability
  • Environmentalist Thought and Ideology (Environmental Science)
  • Management of Land and Natural Resources (Environmental Science)
  • Natural Disasters (Environmental Science)
  • Nuclear Issues (Environmental Science)
  • Pollution and Threats to the Environment (Environmental Science)
  • Social Impact of Environmental Issues (Environmental Science)
  • History of Science and Technology
  • Browse content in Materials Science
  • Ceramics and Glasses
  • Composite Materials
  • Metals, Alloying, and Corrosion
  • Nanotechnology
  • Browse content in Mathematics
  • Applied Mathematics
  • Biomathematics and Statistics
  • History of Mathematics
  • Mathematical Education
  • Mathematical Finance
  • Mathematical Analysis
  • Numerical and Computational Mathematics
  • Probability and Statistics
  • Pure Mathematics
  • Browse content in Neuroscience
  • Cognition and Behavioural Neuroscience
  • Development of the Nervous System
  • Disorders of the Nervous System
  • History of Neuroscience
  • Invertebrate Neurobiology
  • Molecular and Cellular Systems
  • Neuroendocrinology and Autonomic Nervous System
  • Neuroscientific Techniques
  • Sensory and Motor Systems
  • Browse content in Physics
  • Astronomy and Astrophysics
  • Atomic, Molecular, and Optical Physics
  • Biological and Medical Physics
  • Classical Mechanics
  • Computational Physics
  • Condensed Matter Physics
  • Electromagnetism, Optics, and Acoustics
  • History of Physics
  • Mathematical and Statistical Physics
  • Measurement Science
  • Nuclear Physics
  • Particles and Fields
  • Plasma Physics
  • Quantum Physics
  • Relativity and Gravitation
  • Semiconductor and Mesoscopic Physics
  • Browse content in Psychology
  • Affective Sciences
  • Clinical Psychology
  • Cognitive Neuroscience
  • Cognitive Psychology
  • Criminal and Forensic Psychology
  • Developmental Psychology
  • Educational Psychology
  • Evolutionary Psychology
  • Health Psychology
  • History and Systems in Psychology
  • Music Psychology
  • Neuropsychology
  • Organizational Psychology
  • Psychological Assessment and Testing
  • Psychology of Human-Technology Interaction
  • Psychology Professional Development and Training
  • Research Methods in Psychology
  • Social Psychology
  • Browse content in Social Sciences
  • Browse content in Anthropology
  • Anthropology of Religion
  • Human Evolution
  • Medical Anthropology
  • Physical Anthropology
  • Regional Anthropology
  • Social and Cultural Anthropology
  • Theory and Practice of Anthropology
  • Browse content in Business and Management
  • Business History
  • Business Strategy
  • Business Ethics
  • Business and Government
  • Business and Technology
  • Business and the Environment
  • Comparative Management
  • Corporate Governance
  • Corporate Social Responsibility
  • Entrepreneurship
  • Health Management
  • Human Resource Management
  • Industrial and Employment Relations
  • Industry Studies
  • Information and Communication Technologies
  • International Business
  • Knowledge Management
  • Management and Management Techniques
  • Operations Management
  • Organizational Theory and Behaviour
  • Pensions and Pension Management
  • Public and Nonprofit Management
  • Strategic Management
  • Supply Chain Management
  • Browse content in Criminology and Criminal Justice
  • Criminal Justice
  • Criminology
  • Forms of Crime
  • International and Comparative Criminology
  • Youth Violence and Juvenile Justice
  • Development Studies
  • Browse content in Economics
  • Agricultural, Environmental, and Natural Resource Economics
  • Asian Economics
  • Behavioural Finance
  • Behavioural Economics and Neuroeconomics
  • Econometrics and Mathematical Economics
  • Economic Methodology
  • Economic Systems
  • Economic History
  • Economic Development and Growth
  • Financial Markets
  • Financial Institutions and Services
  • General Economics and Teaching
  • Health, Education, and Welfare
  • History of Economic Thought
  • International Economics
  • Labour and Demographic Economics
  • Law and Economics
  • Macroeconomics and Monetary Economics
  • Microeconomics
  • Public Economics
  • Urban, Rural, and Regional Economics
  • Welfare Economics
  • Browse content in Education
  • Adult Education and Continuous Learning
  • Care and Counselling of Students
  • Early Childhood and Elementary Education
  • Educational Equipment and Technology
  • Educational Strategies and Policy
  • Higher and Further Education
  • Organization and Management of Education
  • Philosophy and Theory of Education
  • Schools Studies
  • Secondary Education
  • Teaching of a Specific Subject
  • Teaching of Specific Groups and Special Educational Needs
  • Teaching Skills and Techniques
  • Browse content in Environment
  • Applied Ecology (Social Science)
  • Climate Change
  • Conservation of the Environment (Social Science)
  • Environmentalist Thought and Ideology (Social Science)
  • Natural Disasters (Environment)
  • Social Impact of Environmental Issues (Social Science)
  • Browse content in Human Geography
  • Cultural Geography
  • Economic Geography
  • Political Geography
  • Browse content in Interdisciplinary Studies
  • Communication Studies
  • Museums, Libraries, and Information Sciences
  • Browse content in Politics
  • African Politics
  • Asian Politics
  • Chinese Politics
  • Comparative Politics
  • Conflict Politics
  • Elections and Electoral Studies
  • Environmental Politics
  • European Union
  • Foreign Policy
  • Gender and Politics
  • Human Rights and Politics
  • Indian Politics
  • International Relations
  • International Organization (Politics)
  • International Political Economy
  • Irish Politics
  • Latin American Politics
  • Middle Eastern Politics
  • Political Theory
  • Political Methodology
  • Political Communication
  • Political Philosophy
  • Political Sociology
  • Political Behaviour
  • Political Economy
  • Political Institutions
  • Politics and Law
  • Public Administration
  • Public Policy
  • Quantitative Political Methodology
  • Regional Political Studies
  • Russian Politics
  • Security Studies
  • State and Local Government
  • UK Politics
  • US Politics
  • Browse content in Regional and Area Studies
  • African Studies
  • Asian Studies
  • East Asian Studies
  • Japanese Studies
  • Latin American Studies
  • Middle Eastern Studies
  • Native American Studies
  • Scottish Studies
  • Browse content in Research and Information
  • Research Methods
  • Browse content in Social Work
  • Addictions and Substance Misuse
  • Adoption and Fostering
  • Care of the Elderly
  • Child and Adolescent Social Work
  • Couple and Family Social Work
  • Developmental and Physical Disabilities Social Work
  • Direct Practice and Clinical Social Work
  • Emergency Services
  • Human Behaviour and the Social Environment
  • International and Global Issues in Social Work
  • Mental and Behavioural Health
  • Social Justice and Human Rights
  • Social Policy and Advocacy
  • Social Work and Crime and Justice
  • Social Work Macro Practice
  • Social Work Practice Settings
  • Social Work Research and Evidence-based Practice
  • Welfare and Benefit Systems
  • Browse content in Sociology
  • Childhood Studies
  • Community Development
  • Comparative and Historical Sociology
  • Economic Sociology
  • Gender and Sexuality
  • Gerontology and Ageing
  • Health, Illness, and Medicine
  • Marriage and the Family
  • Migration Studies
  • Occupations, Professions, and Work
  • Organizations
  • Population and Demography
  • Race and Ethnicity
  • Social Theory
  • Social Movements and Social Change
  • Social Research and Statistics
  • Social Stratification, Inequality, and Mobility
  • Sociology of Religion
  • Sociology of Education
  • Sport and Leisure
  • Urban and Rural Studies
  • Browse content in Warfare and Defence
  • Defence Strategy, Planning, and Research
  • Land Forces and Warfare
  • Military Administration
  • Military Life and Institutions
  • Naval Forces and Warfare
  • Other Warfare and Defence Issues
  • Peace Studies and Conflict Resolution
  • Weapons and Equipment

Oxford Handbook of Digital Ethics

  • < Previous chapter
  • Next chapter >

Oxford Handbook of Digital Ethics

29 Privacy in Social Media

Andrei Marmor, Jacob Gould Schurman Professor of Philosophy and Law, Cornell University

  • Published: 10 November 2021
  • Cite Icon Cite
  • Permissions Icon Permissions

Most people’s immediate concern about privacy in social media, and about the internet more generally, relates to data protection. People fear that information they post on various platforms is potentially abused by corporate entities, governments, or even criminals, in all sorts of nefarious ways. The main premise of this chapter is that concerns about data protection, legitimate and serious as they may be, are not, mostly, about the right to privacy. Privacy is about control over the presentation of the self, not about protection of property rights. From the perspective of privacy as self-presentation, I argue that social media is, generally, very conducive to privacy—in fact, often too much so. Social media enables a great deal of privacy at the expense of truth and authenticity. But the medium also comes with dangers of exposure that carry serious risks to privacy, potentially undermining peoples’ ability to control what aspects of themselves they present to others. Privacy in social media is a mixed bag, containing different goods and dangers pulling in opposite directions.

Introduction

Ms Lisa Li, a famous young influencer in China, flaunting a glamorous and lavish lifestyle, with over one million followers, became rather notorious overnight. Her landlord, upset by Ms Li’s unpaid bills and failure to clean up her apartment, exposed to the world her absolutely squalid living conditions. A video posted by the landlord showed Ms Li’s sordid apartment with dog faeces in the living room, and generally so filthy that allegedly even professionals refused to clean up the place. Not surprisingly, reaction on social media instantaneously recorded hostile posts, with tens of thousands of people unfollowing her overnight, and countless expressions of outrage. Ms Li seems to have survived the media onslaught and recovered her reputation since, but her story encapsulates many of the privacy issues that come up in social media. Ms Li’s story exemplifies how a young woman of modest means can turn herself into a social media celebrity, presenting to the world a personal lifestyle far removed from reality. But it also shows how reputation gained over years of hard work can be shattered in an instant, turning fame and glamour to ridicule and outrage overnight. 1

You may wonder why any of these issues involve moral concerns about the right to privacy. After all, most people’s immediate concern about privacy in social media, and internet platforms more generally, relate to data protection. People fear that information they post on various platforms, explicitly or implicitly, is gathered, compiled, and potentially abused by corporate entities, governments, or even criminals, in all sorts of nefarious ways. 2 On the contrary, I am going to argue in this chapter that concerns about data protection, legitimate and serious as they may be, are not, mostly, about the right to privacy. Privacy is about the presentation of the self, not about protection of proprietary rights. And I am going to argue that social media is, generally, conducive to privacy—in fact, often too much so. The main tension in the domain of social media is between privacy and authenticity: social media enables a great deal of privacy at the expense of authenticity. But it also comes with dangers of exposure that carry risks to privacy. On the whole, then, the state of privacy in social media is a mixed bag. Social media is generally conducive to privacy, often too much so; and it also comes with serious risks to privacy, even if, as often is the case, those risks are self-imposed.

What is the right to privacy and how does it conflict with authenticity?

In previous work, I have argued that the main interest protected by the right to privacy is our interest in having a reasonable measure of control over ways we present aspects of ourselves to different others ( Marmor 2015 ). Having a reasonable amount of control over various aspects of ourselves that we present to different others is essential for our well being; it enables us to have the necessary means to navigate our place in the social world and to have reasonable control over our social lives. We need to have the ability to maintain different types of relationships with different people, and that would not be possible without having control over how we present ourselves to different others. Different types of relationships are constituted by different types of expectations about what aspects of ourselves we reveal to each other. Intimate relationships and friendships, for example, are partly constituted by expectations of sharing information and revealing aspects of ourselves that we would not be willing to share with strangers. But we cannot live in a social world that requires constant intimacy either; the possibility of dealing with others at arms lengths, keeping some distance, is as important as the opportunity for intimate relationships. Additionally, we also need to have some space to engage in various innocuous activities without necessarily inviting social scrutiny. For all these, and similar reasons, it is essential for our well being that we have a reasonable level of control over which aspects of ourselves we reveal to different others. This is the main interest that is protected by the right to privacy.

The interest in the protection of personal data that we post or reveal on internet platforms is typically an interest in protecting our property. There is a huge amount of information about ourselves, and our possessions, that we reveal, often without our knowledge, by using the internet, smartphones, and such. But there are two kinds of concerns about this information falling into the wrong hands, as it were. For one, there is the fairly straightforward concern about theft. A great deal of the information that we allow internet platforms to use or to store can be used to steal our financial ‘identity’, empty our bank accounts, charge us for goods and services we had not ordered, and all sorts of similar proprietary misdeeds. These concerns have very little to do with privacy. When someone uses or takes something that belongs to you without your permission, they violate your right to property, not to privacy ( Thomson 1975 ). 3

The second kind of concern relates to so-called ‘big data’ collected by corporations about our consumer profiles, interests, and habits. And this is a tricky matter from a privacy perspective. Most often the kind of information that is gathered, if looked at in isolation, is not the kind of fact about us that we can legitimately expect to keep to ourselves. When you go out to buy a pair of shoes in a store you cannot expect not to be observed by others. Buying those same shoes online should make no difference in this respect. After all, it needs to be charged to your credit card and delivered to your home. In isolation, there is nothing here that should make anyone worry about their privacy. Problems begin to surface with extensive and repeated data collection, that is, when somebody (or some computer algorithm, to be more precise) collects, analyses, and stores data about everything you buy; and perhaps everywhere you happen to go with your smartphone, and every phone number you call up, and so on and so forth. This is when people begin to worry about their privacy, and to some extent, rightly so. But as I will try to show later, this worry is not easy to articulate and it is subject to reasonable disagreement. Before we get there, however, other aspects of privacy in social media will be explored. I’ll get back to the big data question towards the end.

Let us return to the main interest protected by the right to privacy. It is crucial to note that the level of control over which aspects of ourselves we reveal to others needs to be reasonable, not limitless. Having too much control over what aspects of one’s self one can reveal to others compromises authenticity. But this is complicated. On the one hand, there seems to be nothing wrong with withdrawing from the social world, living your life without anyone knowing anything about you. Perhaps your life would not be as rich and rewarding as it could have been, but you commit no wrong by imposing seclusion on yourself. On the other hand, it does seem to be wrong, in some sense, if you manage to get people to believe that you are something quite different from what you really are. An intensely selfish person who manages to get people to believe that she is generous and kind engages in a form of deceit that we may rightly frown upon, even criticize and condemn. Being intensely selfish is bad enough, creating the false impression in others that you are generous makes things even worse. Now, it might be tempting to think that the distinction here pertains to the difference between not revealing things about yourself, which is normally permissible, and actively presenting yourself in ways that are not your authentic self, that is, creating false impressions, which is often wrong. But this action-omission distinction is not going to do all the work here. There are ways of not being truthful or authentic by just keeping quiet. If you are mistakenly introduced in a party as somebody else, then keeping quiet about it might be as much of a lie as knowingly telling a falsehood. But that does not mean that failing to reveal the truth about yourself is always wrong; far from it. Most people normally want to look and seem better than they are, and there is nothing wrong about that. You do not have to post the most authentic selfie on your Instagram page; posting a particularly flattering one is not deceitful.

The story of Ms Lisa Li, however, is a good reminder that authenticity on social media is compromised well beyond flattering pics and self-congratulating presentations. The main danger facing the value of authenticity in the social media context is that the distinction between truth and fiction gets blurred; one often does not know, and many people seem not to care all that much, what is presented as truth and what is clearly just fiction. This is not a threat to privacy. On the contrary, it is often too much privacy at the expense of truth and authenticity. The following section explains both of these claims.

Privacy, authenticity, and fiction

Social media, like Facebook, Twitter, Instagram, and similar platforms, enable people to present aspects of themselves to others in ways that they could not have done without these tools. It enables people to reach a very wide audience at very low cost and almost instantaneously; but more importantly for our concerns here, social media gives people a tremendous amount of choice and control over what aspects of themselves they present to others, including the option of presenting totally fictitious ‘aspects’ of themselves, inventing a public persona that may have very little to do with reality. Even ordinary users of Facebook, who just want to connect with their friends, tend to post aspects of their lives rather selectively, conscious of constructing an image of their lives in forms they wish their audience to perceive. In actuality, the range of self-construction here is very wide, from minor self-flattering images or posts, to outright large-scale deceit, with all the spectrum in between. Since the main interest protected by the right to privacy is precisely the interest in having control over what aspects of yourself you present to different others, it would seem that social media, quite generally, is very conducive to privacy. It enables people to have a great deal of control over their self-presentation, much greater in scope than hitherto possible. 4 Hence, the first question here is not whether social media threatens privacy, but whether it enables it too much: do we get to have too much control over what aspects of ourselves we present to others?

Part of what makes answering this question difficult is the fact that the kind of creative construction of the self enabled by social media is common knowledge. Everybody knows that the persona I present on Facebook or Instagram is somewhat constructed, that it does not necessarily reflect reality. Both users and consumers of the medium realize that fact and fiction are mixed up; it is part of the game, as it were. In other words, people do not necessarily expect full authenticity on social media; they seem to be content to create and to consume the presentations of partly fictitious selves for the sake of other values, knowing, at least at the back of their minds, that authenticity is not assured. 5 But that does not, by itself, settle the question of whether too much authenticity is sacrificed here; even if the sacrifice of truth and authenticity is on the surface and willingly consumed, it might still be a bad state of affairs.

Authenticity might mean different things in different contexts. In one sense, people think of authenticity in terms of a match between one’s deep self, one’s deep character traits, true desires, etc., and the life one lives. An inauthentic person, on this understanding, is one whose desires, plans, and aspirations in life do not quite match what she really is, deep down, as it were. If I tell myself that I love doing philosophy and live my life with that story, while the truth is that deep down I am not all that interested in philosophy, then I am not authentic, in this sense. However, this deep sense of authenticity is not what I am going to refer to here; what I have in mind is a shallower sense, one that refers to the truth or falsehood of one’s self-presentation to others. You fail to be authentic, on this shallow conception, if you present yourself to others in a way that is, as a matter of fact, false about you. The main difference between the deep and the shallow conceptions of authenticity is that the deep form of inauthenticity involves self-deception, while the shallow sense of it does not necessarily involve any self-deception; the deception or inauthenticity in the shallow sense can be entirely self-conscious. In both cases, however, the value of authenticity is very closely tied with the value of truth. In the deep sense, it is the truth to yourself, truth about what you really want, what you really care about, and things like that. In the shallow sense, and the one that is relevant to our concerns here, the truth in question is public. A presentation that is inauthentic is one that attempts to induce others to have false (or grossly inaccurate) beliefs about certain aspects of your self.

To be sure, I am not assuming here that revealing the truth about one’s self is always valuable or that any type of deception is bad. Far from it. As Thomas Nagel (1998) famously argued, it is often the case that telling the truth is the wrong thing to do. In fact, life would be rather unpleasant, almost unbearable, if people told each other everything that comes to their mind. Just imagine telling everyone you encounter what you really think about them; in many cases, they do not need to know, and often would rather not hear. Something similar applies to the presentation of your self; people do not need, and often do not want to know, everything that goes on in your mind (or your body, for that matter). In other words, authenticity (in the shallow sense, as henceforth used) is not always valuable, and one is not ethically or morally required to be authentic at all times.

The previous considerations suggest that privacy and authenticity are often in some inherent conflict or tension. The moral aspects of this conflict play out differently in the domain of personal presentation and the domain of public discourse. Social media, as currently used, spans both private lives and public-political discourse, and these two raise somewhat different moral concerns. In both cases, the underlying concern is the blurring of the distinction between fact and fiction. But the wider moral implications of this blurring of boundaries are quite different. Let me acknowledge, however, a further complication before we proceed. Social media blurs not only the distinction between fact and fiction; it also blurs the distinction between the personal and the public. Nowhere is it more evident than in the proliferation and tremendous impact of influencers. The whole phenomenon of influencers is based on turning the personal into a commercial or social endeavour, sometimes into commercial business, pure and simple. Therefore, the contrast between personal presentations on social media and public-political discourse spans a wide spectrum; most of the social media use is somewhere in the middle, involving elements of both.

One thing we have learned in the past few years is that social media enables a huge amount of staggeringly false and misleading political speech, directly and indirectly. And we have learned that these falsehoods are not idle. Millions of people seem to be influenced by fake news and incredible falsehoods of all kinds, to the extent that they may have tilted the results of elections and other democratic processes. There are very serious concerns here, and they may force us to rethink some of our established views about free speech and democracy. Perhaps, but this is not the topic of the present chapter. I will leave the discussion of social media and politics to others (see Neil Levy’s chapter ‘Fake News: Rebuilding the Epistemic Landscape’).

Let us return to the presentation of the self in social media. What we seem to have here is a domain of endless possibilities of self-construction, ranging from mild manipulation of reality to outright fiction or deception. One curious aspect of the story of Ms Lisa Li, mentioned at the start, is not so much how she lost many of her followers instantaneously, which she did, but the fact that she has not really lost most of them—far from it. Hundreds of thousands of people kept loyal to her, despite the fact that she turned out to be a rather different person from the glamorous social media persona she had depicted. It seems that many of her followers just did not care. Which would be surprising only if you thought that consumers seek the truth, as if they wanted to follow the real Lisa Li. But evidently that is not what her followers were after; they were seeking to share a dream, a kind of visual fiction, and then, when they come to learn that parts of that fiction are not real, they are not all that surprised or disappointed; fiction, after all, is not supposed to be real. There is nothing morally problematic about the desire to consume fiction, whether on social media or elsewhere. But what if the distinction between fiction and reality gets rather blurred? What if people lose interest in the distinction itself, not caring all that much about whether something they are told or shown purports to be fact or fiction? There is clearly something disturbing about it, but it is not easy to pin down what that is.

The difficulty stems from the possible argument that if you do not care about whether a story is fact or fiction, in essence you are treating it as if it was fiction. A story that is consumed as potentially fiction is treated by the consumer on a par with fiction. And if this is true, then perhaps there is nothing wrong with social media blurring the distinction between fact and fiction, as long as people are, by and large, aware of it. Furthermore, if you think about it, there is nothing new here. Capitalist consumerism is based on aggressive advertising, selling us dreams and fantasies in order to sell us products and services. Instagram influencers sell a constructed image of themselves in order to sell products. It is essentially the same idea. Or, not quite, perhaps. As much as we might want to criticize rampant consumerism and all the advertising industry that keeps it afloat, the advertising industry is what it purports to be, an industry aiming to sell you stuff that you may not really need or did not think you even wanted. One problem that seems to plague the social media domain is yet again, a blurring of the boundaries between what is clearly commercial advertising and what is personal, social, or even political.

But I still have not answered the question of what is morally problematic about the blurring of these distinctions on social media. Perhaps it is a good thing that distinctions between fact and fiction, personal and public, entertainment and consumerism, are getting blurred by social media. Challenging established categories and conceptual divisions is how social changes occur over time; it is often what social movements aim to accomplish. Not all social changes are for the better, of course, but many of them are. Furthermore, if the social media world enhances people’s privacy interests, giving them more control over what aspects of themselves they present to different others, all the better, not so? I am far from sure, however, that moral complacency is warranted. Perhaps many of the categories getting blurred on social media provide new opportunities to people, and empower hitherto underprivileged segments of the population. 6 So there are, quite clearly, some good effects here. But the erosion of the value of truth is not so innocuous. If the interest in truth gets eroded, this erosion is not going to remain confined to the use of social media; it is very likely to pervade personal and public life on a much wider scale. The more socially acceptable it becomes to mix fiction with truth without accountability, the less responsibility people are going to feel for truth in general, both in their personal lives and in their civic engagements. This cannot be a good development.

Let me emphasize, however, that the erosion of truth on social media is very intimately linked to its privacy enhancement aspect; it is not a coincidence that a world in which there is much more privacy, there is less concern for the truth. As I have mentioned, the essence of the right to privacy is the right not to tell the truth, at least not all of it. Control over what aspects of yourself you reveal to others is needed precisely because we have legitimate interests in not being all that forthcoming with revealing aspects of ourselves or our lives to others. Privacy and authenticity are inherently in some conflict or tension.

Social media as a tool against privacy

The picture I have depicted so far has been one-sided; I have focused on the privacy-enhancing aspects of social media. But social media is also used for opposite purposes; it is sometimes used to deliberately undermine someone’s privacy, exposing them to the world in ways they do not want to be perceived. 7 I will focus on cases called ‘doxing’, whereby social media users, often in a group, target an individual or a group of individuals for the purpose of public shaming or, in some extreme cases, even harassment or intimidation. Two main aspects of the medium enable this practice: the availability of a huge amount of information on people online, and the ability to reach a very wide audience at very low cost. 8 Individuals are not the only targets of doxing—sometimes governments are too. Wikileaks is a case in point. But I will bracket these governmental or even corporate targets, and focus on the practice of doxing targeting individuals.

Let us start with a simple story. Suppose you happen to know that your friend is cheating on his wife. You keep quiet for a while, but at some point the friendship turns sour and you decide to tweet about your friend’s infidelity with details and all; you know that your friend, and his wife, and your mutual friends, are all following you on Twitter. I presume that we would think that you had misbehaved; gravely so, perhaps. Even if you had a reason to tell your friend’s spouse about her husband’s infidelity, that is something you should have told her in private; sharing it with many is a deliberate act of shaming. Shaming is an act of humiliation, and as such, pro tanto wrong. Unless there is a very good reason to bring shame on someone, it is wrong on a par with deliberate humiliation, a demeaning speech act, striving deliberately to put someone down. Now, of course, the problem with social media is that it technically allows for public shaming on a very large scale. Someone can find out something embarrassing about you and post it on some social media platform or other, rendering the information public instantaneously. Furthermore, it is often very difficult for the targeted individual to rebut the shaming information, even if it is, actually, false. Once a rumour or an image is out there, it is almost impossible to make it go away.

Doxing, by its very nature, would seem to be a violation of privacy; it is done with the explicit aim of revealing to others information about their target that the individual in question would rather not expose, at least not to the public at large. But it does not necessarily follow that doxing is always an unjustified violation of the target’s right to privacy. This is for two reasons: it might not be a violation of the right to privacy at all, and even if it is, it might be a case of a justified violation of a right. Let me explain briefly both of these points.

J. Thomson (1975 : 307) argued a long time ago, and correctly so in my mind, that nobody can have a right that some truth about them not be known. We cannot have proprietary rights over truths about us or about anything else, for that matter. 9 The right to privacy, I argued (contra Thomson), is there to protect our interest in having a reasonable measure of control over ways in which we present ourselves to others. The protection of this interest requires the securing of a reasonably predictable environment about the flow of information and the likely consequences of our conduct in the relevant types of contexts. On my account of the right to privacy, such a right is violated when somebody manipulates, without adequate justification, the relevant environment in ways that significantly diminish your ability to control what aspects of yourself you reveal to others. One typical case is the following: you assume, and have good reason to assume, that by φ-ing you reveal F to A; that is how things normally work. You can choose, on the basis of this assumption, whether to φ or not. Now somebody would clearly violate your right if he were to manipulate the relevant environment, without your knowledge, making it the case that by doing φ you actually reveal F not only to A but also to B et al., or that you actually reveal not just F but also W to A (and/or to B et al.), which means that you no longer have the right kind of control over what aspects of yourself you reveal to others; your choice is undermined in an obvious way ( Marmor 2015 : 14).

Given this account, it would seem that a case of doxing is typically a violation of one’s right to privacy. If you are the target of doxing, your environment is manipulated by others rendering the information you reveal about yourself, knowingly or unknowingly, spread well beyond its intended or reasonably predicted audience. That is clearly a case in which you lose control over what aspects of yourself you reveal to whom. The difficult or borderline cases are those in which doxing reveals information about someone that is publicly available anyway. Suppose, for example, that you are perceived by your acquaintances as a person of modest means, and yet you buy a very expensive piece of real estate, a transaction that you would rather keep to yourself. In many jurisdictions, ownership of real estate is a matter of public record. (And let us assume that there are good reasons for that.) Somebody can easily look it up and post the information on social media, perhaps to embarrass you. They post something that is a matter of public record, even if, normally, people do not bother to spend their time looking for this kind of information. I suspect that people would have different intuitions about this case. It may not be the right thing to do, for sure, but I am not sure that it amounts to a violation of any right of yours. However, once again, the problem in the social media context is the issue of intensity and scale. Targeted doxing usually involves extensive efforts and considerable investment of time and energy in gathering information that, even if publicly available, is not available without such deliberate and extensive research. 10

The situation here is very similar to the question of privacy in public spaces ( Marmor 2015 : 20–21). 11 When you walk around on Main Street you cannot have a privacy expectation not to be observed by indefinite others. By walking on the street you obviously make yourself observable and there’s nothing problematic about that from a privacy perspective. But suppose somebody is following you with a video camera, recording your movements for a while, and posting that on YouTube. Now it might seem a violation of your right to privacy, even if the recording was done in a public space. Why is that? Presumably, because making yourself observable in a public space is not an invitation, or even consent to, becoming an object of gaze or surveillance. The concern here is about attention and record-keeping. When you take a walk on Main Street, you are perfectly aware of the fact that you have no control over who happens to be there and thus is able to see you; but you also rely on the fact that people’s attention and memory are very limited. You do not expect to have every tiny movement of yours noticed and recorded by others. In other words, consent to public exposure is not unlimited. Voluntarily giving indefinite others the opportunity to see you is not an invitation, or even tacit consent, to gaze at you, and certainly not a consent to record your doings, digitally or otherwise.

But what if expectations actually change, and people come to know that certain public spaces are subject to extensive surveillance? What if we are all well informed, for example, that all the streets in our town are covered with CCTV cameras that record everything everywhere? Would that violate our right to privacy in public spaces? I think that the answer is ‘Yes’ because there is another way in which one’s right to privacy can be violated, that is, by diminishing the space in which we can control what aspect of ourselves we reveal to others to an unacceptably small amount in an important domain of human activity ( Marmor 2015 : 14). Needless to say, determining what counts as a violation of privacy in this respect is bound to be controversial and often difficult to determine. Presumably, there are two main factors in play: the relative importance of the type of activity in question (e.g. walking on ordinary streets versus entering a particular building), and the level of diminished control over concealment or exposure. Either way, we should recognize that people’s right to privacy can be violated even in public spaces ( Véliz 2018 ).

Many cases of doxing based on exposure of data collected from public records involve essentially the same moral issue. Living in a world in which there is a great deal of information about us on public records might not be a problem in and of itself. But becoming the focus of targeted attention based on information-gathering from public records becomes a form of surveillance, quite possibly violating the right to privacy. Such actions diminish, sometimes very considerably, your ability to control what aspects of yourself you reveal to others. And that is so because we can normally expect ordinary others to have limited interest, attention, and resources for digging up information on us that is stored somewhere somehow.

That doxing often, if not quite always, involves a violation of the target’s right to privacy does not necessarily entail that it is never justified, all things considered. Possible moral justifications of rights’ violations are multifarious and greatly depend on circumstances. 12 Sometimes a rights violation is justified when it is required in order to secure a conflicting right that ought to prevail under the circumstances. Sometimes a right may be justly violated in order to secure a common good of greater moral significance. Suppose, for example, that in order to expose a politician’s staggering hypocrisy you need to violate their right to privacy. Still, the exposure might serve the common good and democratic values to an extent that justifies the rights’ violation.

It is a common doctrinal principle in libel laws in many jurisdictions that the more one enjoys a public persona the less protection from libel one can legally expect. Something similar, at least morally speaking, may well apply to protection of privacy. The more you deliberately and voluntarily expose yourself to the public, as it were, perhaps the less concern about your right to privacy you can legitimately expect to have. But this principle, if a principle it is, should have an important caveat: it would be quite unjustified to expose facts about a public persona by violating their right to privacy if the facts disclosed are not related to what makes the person famous. If a politician thrives on gay-bashing and promoting ‘family values’, exposing that the politician is gay himself might be quite justified. But the same would not be true of, say, a famous scientist; if the scientist’s claim to fame has nothing to do with sexuality or anything remotely relevant to that, exposing their sexual preferences against their wish cannot be justified. If the fact you expose about a scientist has something to do with their scientific integrity, that may be justified. Admittedly, the distinction between facts about a public persona that are relevant to their public status and those that are not is sometimes difficult to draw. However, since we are talking about the justified violation of a right here, the justification for violating the right to privacy needs to be fairly robust. Which means that, in cases of doubt, when it is not entirely clear that the disclosure in question is relevant to the person’s public status, the doubt should count in favour of respecting the right to privacy.

Perhaps now you wonder about the case of Ms Li: if her landlord violated her right to privacy, which may well have been the case here, was it a justified violation of her right? Are facts about Ms Li’s sordid living conditions relevant to her claim to fame? My own sense is that the answer is probably ‘Yes’, but I can see that this might be contentious. The more you think that influencers such as Ms Li are selling fiction or fantasy, the less relevant her own life is to the persona she creates on social media.

Social media, big data, and privacy

We now seem to live in a world in which almost everything we do, everywhere we go, and everything we buy, is recordable by some computerized system or other. And much of it, if not most, is actually recorded, aggregated, sorted, and often sold by various systems ( Zuboff 2019 ). Our digital footprint is ubiquitous, and easily utilized by interested parties. That governments may have access to all this information is a serious reason for concern. Governments that have no great respect for democracy and human rights have gained powerful tools that they can use for political oppression, and even democratic and decent regimes may occasionally succumb to the temptation to use such information in ways that violate people’s rights. 13 The serious political hazard with the recording of our digital footprints threaten many of our rights and freedoms, but not necessarily, or even primarily, our right to privacy. Political oppression violates more serious and urgent rights than the right to privacy. There are countries in which people are detained and thrown into jail for things they post on social media; when you find yourself in jail for something you’d posted on Facebook, the violation of your right to privacy is the least of your concerns. Generally speaking, the dangers of government surveillance go far beyond threats to our privacy; they threaten our basic civil and human rights.

Political oppression, however, is not the topic of this chapter. I will therefore bracket the dangers of big (and small) data collection by governments and focus on the private market. 14 One major development of the digital age is the commodification of our consumer profiles. There is a huge amount of digital footprint we leave on a daily basis about our consumer behaviour; things we buy, places we visit, interests we express, movies we stream, even things we search on Google, indicate our tastes and desires, and our willingness to pay for this or that. The ability of computers to store and analyse this information renders our consumer profiles a commodity that can be bought and sold, something that has a market value. Mostly, I presume, it is valuable to corporations for marketing purposes, targeting their marketing efforts in ways that are tailored to our tastes and preferences. All this digital analysis of consumer profiles is not done by people; there is nobody sitting there in front of a computer, thinking, ‘Oh, I see that Professor Marmor likes Borsalino hats. Let’s send him some ads about the latest models.’ Targeted advertising is automated. Our consumer profiles are generated and commodified on a huge scale, and analysed by complex algorithms that handle hundreds of millions of data points. This system makes the concern about privacy rather tricky here.

Let me focus exclusively, however, on the market use of people’s digital footprints for commercial purposes. Is targeted advertising, based on our digital footprints, a threat to consumers’ privacy? For the sake of simplicity, let us assume that all this data collected on our consumer profiles is done without our ex ante consent. 15 So here is a simple and fairly standard example: you post on your Facebook page that you are considering a trip to Paris this summer, intending, for some reason or other, to share this information with your friends. Soon enough (very soon, in my experience) you start getting advertisements on your Facebook about hotels in Paris, flights to Paris, etc. For many people, there is something spooky about this; it feels as if someone is watching your Facebook posts and sending you ads in response. But as I mentioned, that is not the case, and besides, this feeling of spookiness is not shared by all. Many people are perfectly fine with getting these targeted ads; they do not care that some fancy algorithm enables advertisers to do that. Furthermore, and this is a crucial factor that is sometimes forgotten, there is a commercial transaction here in the background: we get to use the social media tools offered by these corporations without pay, in exchange for subjection to targeted advertising. It is a contract, and the contract, on its face, does not seem to be obviously unfair or exploitative. 16

As in many cases of rapid technological developments, it may have taken a while for most of us, users of social media and other internet platforms, to realize that our digital footprints have become a merchandise in themselves, bought and sold by companies for commercial purposes. But I think that now we know this to be the case, and I think that most people understand that the commercial value of our consumer behaviour is priced in the services we get, its market value paying for our free use of social media and internet tools. In principle, the situation here is no different from other, more mundane contexts, in which the market value of captive audience is priced in the products we buy. When you go to the cinema to watch a movie, you are subjected to about twenty minutes of ‘previews’ and other ads; if cinemas had to forgo this practice, presumably our movie tickets would end up costing us more.

But now, you may wonder, where is the threat to privacy in this commodification of our consumer profiles? There are aspects of this new world of commodification of our habits that are certainly troubling; the fact that some data collected on one’s consumer habits is bought and sold by corporations for commercial use might raise concerns about the overreach of capitalism; targeted advertising surely augments the concerns we have about commercial advertising generally, structuring our preferences and desires in questionable ways, but none of it seems to be a threat to privacy. My ability to control ways in which I present myself to different others is not undermined by these commercial practices. Of course, there might be some threats to privacy on the margins. If you start getting ads for a product you do not want others to know about, then if somebody happens to see your computer screen with those ads displayed, they may get to know something about you that you would have rather kept to yourself. But these are marginal cases, and they may come up in countless other contexts. My guess is that most people are concerned about the potential for the abuse of information that is commercially transacted; they fear that it might fall into the wrong hands. Perhaps the government might get hold of your habits or whereabouts in ways that might put you in a vulnerable position; or perhaps rouge agents may use this information to hack into your assets and steal your possessions. These are serious concerns, for sure, but, as I have tried to argue here all along, they are not concerns about the right to privacy. Strange as it may sound, the commodification of our consumer profiles and even more generally, big data collection, threaten many of our rights and freedoms, but the right to privacy is not the primary concern.

Acknowledgements

I am indebted to Alicia Patterson for research assistance on this chapter, and to Carissa Véliz for helpful comments.

The story about Lisa Li has been widely reported by news outlets, e.g. https://www.bbc.com/news/world-asia-china-49830855 , accessed 10 August 2021. There are many other similar cases, such as a vegan influencer caught eating meat, or a middle-aged YouTube celebrity who used an image-modifying camera to make her appear much younger than she was. These are simpler cases of outright deceit. I am using the example of Lisa Li, however, precisely because it is a little more ambiguous and complex.

For detailed accounts, see, e.g. Nissenbaum (2010 : chs 1–3) and Zuboff (2019) .

Following a long Lockean tradition, many philosophers assume that we have property rights in ourselves. Others find the idea of self-ownership fraught with difficulties, perhaps even incoherent. However, delving into this philosophical morass would be far beyond the scope of this chapter, and not quite needed. Even those who find the idea of self-ownership appealing would still want to maintain a distinction between the right to privacy and the right to property. A notable exception is Thompson (1975) . I responded to Thomson’s argument in Marmor (2015) .

See, e.g. Cocking and Van Den Hoven (2018 : ch. 3). For a more sceptical take on this view, see Marwick and Boyd (2011) , who argue that social media makes it difficult for users to understand and navigate social boundaries. Notice, however, that the right to privacy, on my account, is a control right; that does not mean, of course, that people necessarily exercise their control judiciously or wisely. The fact that many tend to post things on social media that, upon reflection, they should not have revealed, or that they come to regret, does count against the fact that they exercise their right.

Up to a point, it would seem. Researchers found that there is a correlation between the time people (especially teenagers) spend on Facebook and depression. One speculation is that people do not quite internalize the fact that the rosy picture of others’ lives they see on social media is actually constructed, and thus feel demoralized or depressed by the comparison to their own humble existence; see, e.g. Steers et al. (2014) .

The tremendous proliferation of influencers would seem to attest to the fact that countless opportunities arise here, often for people who would otherwise have much more limited options. But the reality is slightly more complex; see Duffy (2017) .

The most vulgar and unfortunately prevalent example is the posting of nude photos or videos of women (mostly) without their consent, on dubious porn sites and other internet outlets. These are obvious and blatant violations of privacy that ought to be criminalized and prosecuted. (As with everything else, there are some borderline cases, of course, when there was some qualified consent but the terms of it are allegedly breached or abused; those cases are more complicated.)

For a detailed account of doxing, and its different types and practices, see Douglas (2016) .

I am aware of the fact that Thomson’s thesis is controversial, but I defended this particular view in some detail in Marmor (2015 : 4–6).

For an excellent account of the considerations involved in such cases, see Rumbold and Wilson (2019) . On their account, the question of whether you intend to make some information public and accessible to others is of crucial importance to the question of whether your right to privacy has been violated or not. I am slightly more sceptical about the role of intention here; there might be cases in which even if you did not intend to allow people to have access to some information available on you online, you should have known better than to rely on its concealment. Generally, however, I am largely in agreement with their account.

For a somewhat different account of the right to privacy in public spaces, see Véliz (2018) .

Some philosophers use the word ‘violation’ of a right only when it is not justified, calling justified violations ‘infringement’ of a right. There is no uniformity of usage in the literature, however, and I will not adhere to this terminological distinction. The idea itself is clear enough, and as old as the literature on rights generally. With the exception of Kant, perhaps, no one argues that rights have absolute normative force.

See, e.g. Richards (2013) .

The separation is somewhat artificial, of course, since one of the dangers of data collected by private corporations is that governments can force them to hand over their data.

There are now some jurisdictions that strive to change that by law; California recently enacted a law (California Consumer Privacy Act, 2019) requiring retailers to seek customers’ explicit consent for selling their consumer profiles to others. How much of an actual change in the commodification of consumer profiles this will bring about remains to be seen.

I am talking about the principle here, not the details or the legal aspects of it. Many lawyers have reservations about the lack of transparency in such contracts and about the fact that most consumers are unaware of their contents. See, e.g. Hoofnagle and Whittington (2014) .

Cocking, Dean , and Van Den Hoven, Jeroen ( 2018 ), Evil Online (Oxford: Wiley Blackwell).

Google Scholar

Google Preview

Douglas, David ( 2016 ), ‘ Doxing: A Conceptual Analysis ’, Ethics & Information Technology 18, 199.

Duffy, Brooke E. ( 2017 ), (Not) Getting Paid for What you Love: Gender, Social Media, and Aspirational Work (New Haven, CT: Yale University Press).

Hoofnagle, Chris J. , and Whittington, Jan ( 2014 ), ‘ Accounting for the Costs of the Internet’s Most Popular Price ’, UCLA Law Review 61, 606.

Marmor, Andrei ( 2015 ), ‘ What is the Right to Privacy? ’, Philosophy & Public Affairs 43, 1.

Marwick, Alice E. , and Boyd, Dana ( 2011 ), ‘ I Tweet Honestly, I Tweet Passionately: Twitter Users, Context Collapse, and the Imagined Audience ’, New Media & Society 13, 114.

Nagel, Thomas ( 1998 ), ‘ Concealment and Exposure ’, 27 Philosophy & Public Affairs 13, 3.

Nissenbaum, Helen ( 2010 ), Privacy in Context (Redwood City, CA: Stanford University Press).

Richards, Neil ( 2013 ), ‘ The Dangers of Surveillance ’, Harvard Law Review 126, 1934.

Rumbold, Benedict , and Wilson, James ( 2019 ), ‘ Privacy Rights and Public Information ’, The Journal of Political Philosophy 27, 3.

Steers, Mai-Ly , Wickham, Robert , and Acitelli, Linda ( 2014 ), ‘ Seeing Everyone Else’s Highlight Reels: How Facebook Usage Is Linked to Depressive Symptoms ’, Journal of Social and Clinical Psychology 33, 701.

Thomson, Judith J. ( 1975 ), ‘ The Right to Privacy ’, Philosophy & Public Affairs 4, 295.

Véliz, Carissa ( 2018 ), ‘In the Privacy of Our Streets’, in Bryce Clayton Newell , Tjerk Timan , and Bert-Jaap Koops , eds, Surveillance, Privacy and Public Space (London: Routledge), 16.

Zuboff, Shoshana ( 2019 ), The Age of Surveillance Capitalism (London: Profile Books).

  • About Oxford Academic
  • Publish journals with us
  • University press partners
  • What we publish
  • New features  
  • Open access
  • Institutional account management
  • Rights and permissions
  • Get help with access
  • Accessibility
  • Advertising
  • Media enquiries
  • Oxford University Press
  • Oxford Languages
  • University of Oxford

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide

  • Copyright © 2024 Oxford University Press
  • Cookie settings
  • Cookie policy
  • Privacy policy
  • Legal notice

This Feature Is Available To Subscribers Only

Sign In or Create an Account

This PDF is available to Subscribers Only

For full access to this pdf, sign in to an existing account, or purchase an annual subscription.

Social Media and Privacy

  • Open Access
  • First Online: 09 February 2022

Cite this chapter

You have full access to this open access chapter

essay on does social media violate our privacy

  • Xinru Page 7 ,
  • Sara Berrios 7 ,
  • Daricia Wilkinson 8 &
  • Pamela J. Wisniewski 9  

17k Accesses

4 Citations

4 Altmetric

With the popularity of social media, researchers and designers must consider a wide variety of privacy concerns while optimizing for meaningful social interactions and connection. While much of the privacy literature has focused on information disclosures, the interpersonal dynamics associated with being on social media make it important for us to look beyond informational privacy concerns to view privacy as a form of interpersonal boundary regulation. In other words, attaining the right level of privacy on social media is a process of negotiating how much, how little, or when we desire to interact with others, as well as the types of information we choose to share with them or allow them to share about us. We propose a framework for how researchers and practitioners can think about privacy as a form of interpersonal boundary regulation on social media by introducing five boundary types (i.e., relational, network, territorial, disclosure, and interactional) social media users manage. We conclude by providing tools for assessing privacy concerns in social media, as well as noting several challenges that must be overcome to help people to engage more fully and stay on social media.

You have full access to this open access chapter,  Download chapter PDF

Similar content being viewed by others

essay on does social media violate our privacy

Privacy and Empowerment in Connective Media

essay on does social media violate our privacy

Privacy in Social Information Access

essay on does social media violate our privacy

The Socio-economic Impacts of Social Media Privacy and Security Challenges

1 introduction.

The way people communicate with one another in the twenty-first century has evolved rapidly. In the 1990s, if someone wanted to share a “how-to” video tutorial within their social networks, the dissemination options would be limited (e.g., email, floppy disk, or possibly a writeable compact disc). Now, social media platforms, such as TikTok, provide professional grade video editing and sharing capabilities that give users the potential to both create and disseminate such content to thousands of viewers within a matter of minutes. As such, social media has steadily become an integral component for how people capture aspects of their physical lives and share them with others. Social media platforms have gradually altered the way many people live [ 1 ], learn [ 2 , 3 ], and maintain relationships with others [ 4 ].

Carr and Hayes define social media as “Internet-based channels that allow users to opportunistically interact and selectively self-present, either in real time or asynchronously, with both broad and narrow audiences who derive value from user-generated content and the perception of interaction with others” [ 5 ]. Social media platforms offer new avenues for expressing oneself, experiences, and emotions with broader online communities via posts, tweets, shares, likes, and reviews. People use these platforms to talk about major milestones that bring happiness (e.g., graduation, marriage, pregnancy announcements), but they also use social media as an outlet to express grief and challenges, and to cope with crises [ 6 , 7 , 8 ]. Many scholars have highlighted the host of positive outcomes from interpersonal interactions on social media including social capital, self-esteem, and personal well-being [ 9 , 10 , 11 , 12 ]. Likewise, researchers have also shed light on the increased concerns over unethical data collection and privacy abuses [ 13 , 14 ].

This chapter highlights the privacy issues that must be addressed in the context of social media and provides guidance on how to study and design for social media privacy. We first provide an overview of the history of social media and its usage. Next, we highlight common social media privacy concerns that have arisen over the years. We also point out how scholars have identified and sought to predict privacy behavior, but many efforts have failed to adequately account for individual differences. By reconceptualizing privacy in social media as a boundary regulation, we can explain these gaps from previous one-size-fits-all approaches and provide tools for measuring and studying privacy violations. Finally, we conclude with a word of caution about the consequences of ignoring privacy concerns on social media.

2 A Brief History of Social Media

Section highlights.

Social media use has quickly increased over the past decade and plays a key role in social, professional, and even civic realms. The rise of social media has led to “networked individualism.”

This enables people to access a wider variety of specialized relationships , making it more likely they can meet a variety of needs. It also allows people to project their voice to a wider audience.

However, people have more frequent turnover in their social networks , and it takes much more effort to maintain social relations and discern (mis)information and intention behind communication.

The initial popularity of social media harkened back to the historical rise of social network sites (SNSs). The canonical definition of SNSs is attributed to Boyd and Ellison [ 15 ] who differentiate SNSs from other forms of computer-mediated communication. According to Boyd and Ellison, SNS consists of (1) profiles representing users and (2) explicit connections between these profiles that can be traversed and interacted with. A social networking profile is a self-constructed digital representation of oneself and one’s social relationships. The content of these profiles varies by platform from profile pictures to personal information such as interests, demographics, and contact information. Visibility also varies by platform and often users have some control over who can see their profile (e.g., everyone or “friends”). Most SNSs also provide a way to leave messages on another’s profile, such as posting to someone’s timeline on Facebook or sending a mention or direct message to someone on Twitter.

Public interest and research initially focused on a small subset of SNSs (e.g., Friendster [ 16 ] and MySpace [ 17 , 18 , 19 ]), but the past decade has seen the proliferation of a much broader range of social networking technologies, as well as an evolution of SNSs into what Kane et al. term social media networks [ 20 ]. This extended definition emphasizes the reach of social media content beyond a single platform. It acknowledges how the boundedness of SNSs has become blurred as platform functionality that was once contained in a single platform, such as “likes,” are now integrated across other websites, third parties, and mobile apps.

Over the past decade, SNSs and social media networks have quickly become embedded in many facets of personal, professional, and social life. In that time, these platforms became more commonly known as “social media.” In the USA, only 5% of adults used social media in 2005. By 2011, half of the US adult population was using social media, and 72% were social users by 2019 [ 21 ]. MySpace and Facebook dominated SNS research about a decade ago, but now other social media platforms, such as YouTube, Instagram, Snapchat, Twitter, Kik, TikTok, and others, are popular choices among social media users. The intensity of use also has drastically increased. For example, half of Facebook users log on several times a day, and three-quarters of Facebook users are active on the platform at least daily [ 21 ]. Worldwide, Facebook alone has 1.59 billion users who use it on a daily basis and 2.41 billion using it at least monthly [ 22 ]. About half of the users of other popular platforms such as Snapchat, Instagram, Twitter, and YouTube also report visiting those sites daily. Around the world, there are 4.2 billion users who spend a cumulative 10 billion hours a day on social networking sites [ 23 ]. However, different social networking sites are dominant in different cultures. For example, the most popular social media in China, WeChat (inc. Wēixìn 微信), has 1.213 billion monthly users [ 23 ].

While SNS profiles started as a user-crafted representation of an individual user, these profiles now also often consist of information that is passively collected, aggregated, and filtered in ways that are ambiguous to the user. This passively collected information can include data accessed through other avenues (e.g., search engines, third-party apps) beyond the platform itself [ 24 ]. Many people fail to realize that their information is being stored and used elsewhere. Compared to tracking on the web, social media platforms have access to a plethora of rich data and fine-grained personally identifiable information (PII) which could be used to make inferences about users’ behavior, socioeconomic status, and even their political leanings [ 25 ]. While online tracking might be valuable for social media companies to better understand how to target their consumers and personalize social media features to users’ preferences, the lack of transparency regarding what and how data is collected has in more recent years led to heightened privacy concerns and skepticism around how social media platforms are using personal data [ 26 , 27 , 28 ]. This has, in turn, contributed to a loss of trust and changes in how people interact (or not) on social media, leading some users to abandon certain platforms altogether [ 26 , 29 ] or to seek alternative social media platforms that are more privacy focused.

For example, WhatsApp, a popular messaging app, updated its privacy policy to allow its parent company, Facebook, and its subsidiaries to collect WhatsApp data [ 30 ]. Users were given the option to accept the terms or lose access to the app. Shortly after, WhatsApp rival Signal reported 7.5 million installs globally over 4 days. Recent and multiple social media data breaches have heightened people’s awareness around potential inferences that could be made about them and the danger in sensitive privacy breaches. Considering the invasive nature of such practices, both consumers and companies are increasingly acknowledging the importance of privacy, control, and transparency in social media [ 31 ]. Similarly, as researchers and practitioners, we must acknowledge the importance of privacy on social media and design for the complex challenges associated with networked privacy. These types of intrusions and data privacy issues are akin to the informational privacy issues that have been investigated in the context of e-commerce, websites, and online tracking (see Chap. 9 ).

While early research into social media and privacy largely focused on these types of concerns, researchers have uncovered how the social dynamics surrounding social media have led to a broader array of social privacy issues that shape people’s adoption of platforms and their usage behaviors. Rainie and Wellman explain how the rise of social technologies, combined with ubiquitous Internet and mobile access, has led to the rise of “networked individualism” [ 32 ]. People have access to a wider variety of relationships than they previously did offline in a geographically and time-bound world. These new opportunities make it more likely that people can foster relationships that meet their individual needs for havens (support and belonging), bandages (coping), safety nets (protect from crisis), and social capital (ability to survive and thrive through situation changes). Additionally, social media users can project their voice to an extended audience, including many weak ties (e.g., acquaintances and strangers). This enables individuals to meet their social, emotional, and economic needs by drawing on a myriad of specialized relationships (different individuals each particularly knowledgeable in a specific domain such as economics, politics, sports, caretaking). In this way, individuals are increasingly networked or embedded within multiple communities that serve their interests and needs.

Inversely, networked individualism has also made people less likely to have a single “home” community, dealing with more frequent turnover and change in their social networks. Rainie and Wellman describe how people’s social routines are different from previous generations that were more geographically bound – today, only 10% of people’s significant ties are their neighbors [ 32 ]. As such, researchers have questioned and studied the extent to which people can meaningfully maintain interpersonal relationships on social media. The upper limit for doing so has been estimated at 150 connections or “friends” [ 33 ], but social media connections often well exceed this number. With such large networks, it also takes users much more effort to distinguish (mis)information, when communication is intended for the user, and the intent behind that communication. The technical affordances of social media can also help or hinder their (in)ability to capture the nuances of the various relationships in their social network. On many social media platforms, relationships are flattened into friends and followers, making them homogenous and lacking differentiation between, for instance, casual acquaintance and trusted confidant [ 16 , 34 ]. These characteristics of social media lead to a host of social privacy issues which are crucial to address. In the next section, we summarize some of the key privacy challenges that arise due to the unique characteristics of social media.

3 Privacy Challenges in Social Media

Information disclosure privacy issues have been a dominant focus in online technologies and the primary focus for social media. It focuses on access to data and defining public vs. private disclosures . It emphasizes user control over who sees what.

With so many people from different social circles able to access a user’s social media content, the issues of context collapse occur. Users may post to an imagined audience rather than realizing that people from multiple social contexts are privy to the same information.

The issues of self-presentation jump to the foreground in social media. Being able to manage impressions is a part of privacy management.

The social nature of social media also introduces the issues of controlling access to oneself , both in terms of availability and physical access.

Despite all of these privacy concerns, there is a noted privacy paradox between what people say they are concerned about and their resulting behaviors online.

Early focus of social media privacy research was focused on helping individuals meet their privacy needs in light of four key challenges: (1) information disclosure, (2) context collapse, (3) reputation management, and (4) access to oneself. This section gives an overview of these privacy challenges and how research sought to overcome them. The remainder of this chapter shows how the research has moved beyond focusing on the individual when it comes to social media and privacy; rather, social media privacy has been reconceptualized as a dynamic process of interpersonal boundary regulation between individuals and groups.

3.1 Information Disclosure/Control over Who Sees What

A commonality among early social media privacy research is that the focus has been on information privacy and self-disclosure [ 35 ]. Self-disclosure is the information a person chooses to share with other people or websites, such as posting a status update on social media. Information privacy breaches occur when a website and/or person leaks private information about a user, sometimes unintentionally. Many studies have focused on informational privacy and on sharing information with, or withholding it from, the appropriate people [ 36 , 37 , 38 ] on social media. Privacy settings related to self-disclosure have also been studied in detail [ 39 , 40 , 41 ]. Generally, social media platforms help users control self-disclosure in two ways. First is the level of granularity or type of information that one can share with others. Facebook is the most complex, allowing users to disclose and control more granular information for profile categories such as bio, website, email addresses, and at least eight other categories at the time of writing this chapter. Others have fewer information groupings, which make user profiles chunkier, and thus self-disclosure boundaries less granular. The second dimension is one’s access level permissions, or with whom one can share personal information. The most popular social media platforms err on the side of sharing more information to more people by allowing users to give access to categories such as “Everyone,” “All Users,” or “Public.” Similarly, many social media platforms give the option for access for “friends” or “followers” only.

Many researchers have highlighted how disclosures can be shared more widely than intended. Tufekci examined disclosure mechanisms used by college students on MySpace and Facebook to manage the boundary between private and public. Findings suggest that students are more likely to adjust profile visibility rather than limiting their disclosures [ 42 ]. Other research points out how users may not want their posts to remain online indefinitely, but most social media platforms default to keeping past posts visible unless the user specifies otherwise [ 43 ]. Even when the platform offers ways to limit post sharing, there are often intentional and unintentional ways this content is shared that negates the users’ wishes. For example, Twitter is a popular social media platform where users can choose to have their tweets available only to their followers. However, millions of private tweets have been retweeted, exposing private information to the public [ 44 ]. Even platforms like Snapchat, which make posts ephemeral by default, are susceptible to people taking screenshots of a snap and distributing through other channels. Thus, as social media companies continue to develop social media platforms, they should consider how to protect users from information disclosure and teach people to practice privacy protective habits.

Although some users adjust their privacy settings to limit information disclosures, they may be unaware of third-party sites that can still access their information. Scholars have emphasized the importance of educating users on the secondary use of their data, such as when third-party software takes information from their profiles [ 45 ]. Data surveillance continues to expand, and the business model of social media corporations tends to favor getting more information about users, which makes it difficult for users that want to control their disclosure [ 46 ]. Third-party apps can also access information about social media users’ connections without consent of the person whose information is being stored [ 47 ].

3.2 Unique Considerations for Managing Disclosures Within Social Media

As mentioned earlier, social media can expand a person’s network, but as that network expands and diversifies, users have less control over how their personal information is shared with others. Two unique privacy considerations for social media that arise from this tension are context collapse and imagined audiences, which we describe in more detail in the subsections below. For example, as Facebook has become a social gathering place for adults, one’s “friends” may include family members, coworkers, colleagues, and acquaintances all in one virtual social sphere. Social media users may want to share information with these groups but are concerned about which audiences are appropriate for sharing what types of information. This is because these various social spheres that intersect on Facebook may not intersect as readily in the physical world (e.g., college buddies versus coworkers) [ 48 ]. These distinct social circles are brought together into one space due to social media. This concept is referred to as “context collapse” since a user’s audience is no longer limited to one context (e.g., home, work, school) [ 15 , 49 , 50 ]. We highlight research on the phenomenon of the privacy paradox and explain how context collapse and imagined audiences may help explain the apparent disconnect between users’ stated privacy concerns and their actual privacy behavior.

Context Collapse

Nuanced differences between one’s relationships are not fully represented on social media. While real-life relationships are notorious for being complex, one of the biggest criticisms of social media platforms is that they often simplify relationships to a “binary” [ 51 ] or “monolithic” [ 52 ] dimension of either friend or not friend. Many platforms just have one type of relationship such as a “friend,” and all relationships are treated the same. Once a “friend” has been added to one’s network, maintaining appropriate levels of social interactions in light of one’s relationship context with this individual (and the many others within one’s network) becomes even more problematic [ 53 ]. Since each friend may have different and, at times, mutually exclusive expectations, acting accordingly within a single space has become a challenge. As Boyd points out, for instance, teenagers cannot be simultaneously cool to their friends and to their parents [ 53 ]. Due to this collapsed context of relationships within social media, acquaintances, family, friends, coworkers, and significant others all have the same level of access to a social media user once added to one’s network – unless appropriately managed.

Research reveals that the way people manage context collapses varies. Working professionals might deal with context collapse by limiting posts containing personal information, creating different accounts, and avoiding friending those they worked with [ 54 ]. As another example, many adolescents manage context collapse by keeping their family members separate from their personal accounts [ 55 ]. Other mechanisms for managing context collapse include access-level permission to request friendship, denying friend requests, and unfriending. While there is limited support for manually assigning different privileges to each friend, the default is to start out the same and many users never change those defaults.

Privacy incidents resulting from mixing work and social media show the importance of why context collapse must be addressed. Context collapse has been shown to negatively affect those seeking employment [ 56 ], as well as endangering those who are employed. For example, a teacher in Massachusetts lost her job because she did not realize her Facebook posts were public to those who were not her friends; her complaints about parents of students getting her sick led to her getting fired from her job [ 57 ]. Many others have shared anecdotes about being fired after controversial Facebook and Twitter posts [ 58 , 59 ]. Even celebrities who live in the public eye can suffer from context collapse [ 60 , 61 ]. Kim Kardashian, for example, received intense criticism from Internet fans when she posted a photo on social media of her daughter using a cellphone and wearing makeup while Kim was getting ready for hair and wardrobe [ 62 ]. Many online users criticized her parenting style for not limiting screen time and Kim subsequently shared a photo of a stack of books that the kids have access to while she works.

Nevertheless, context collapse can also increase bridging social capital, which is the potential social benefit that can come through having ties to a wider audience. Context collapse enables this to occur by allowing people to increase their connections to weak ties and creating serendipitous situations by sharing with people beyond whom one would normally share [ 60 ]. For example, job hunters may increase their chances of finding a job by using social media to network and connect with those they would not normally be associated with on a daily basis. Getting out a message or spreading the word can also be accomplished more easily. For instance, finding people to contribute to natural disaster funds can be effective on social media because multiple contexts can be easily reached from one account [ 63 ]. In addition to managing context collapse, social media users also have to anticipate whether they are sharing disclosures with their intended audiences.

Imagined Audiences

The disconnect between the real audience and the imagined audience on social media poses privacy risks. Understanding who can see what content, how, when, and where is key to deciding what content to share and under what circumstances. Yet, research has consistently demonstrated how users do not accurately anticipate who can potentially see their posts. This manifests as wrongly anticipating that a certain person can see content (when they cannot), as well as not realizing when another person can access posted content. Users have an “imagined audience” [ 64 , 65 ] to whom they are posting their content, but it often does not match the actual audience viewing the user’s content. Social media users typically imagine that the audience for their social media posts are like-minded people, such as family or close friends [ 65 ]. Sometimes, online users think of specific people or groups when creating content such as a daughter, coworkers, people who need cleaning tips, or even one’s deceased father [ 65 ]. Despite these imagined audiences, privacy settings may be set so that many more people can see these posts (acquaintances, strangers, etc.). While users do tend to limit who sees their profile to a defined audience [ 44 , 66 , 67 ], they still tend to believe their posts are more private than they actually are [ 49 , 68 ].

Some users adopt privacy management strategies to counter potential mismatch in audience. Vitak identified several privacy management tactics users employ to disclose information to a limited audience [ 69 ]:

Network-based . Social media users decide who to friend or follow, therefore filtering their network of people. Some Facebook users avoid friending people they do not know. Others set friends’ profiles to “hidden,” so that they do not have to see their posts, but avoid the negative connotations associated with “unfriending.”

Platform-based . Some users choose to use the social media sites’ privacy settings to control who sees their posts. A common approach on Facebook is to change the setting to be “friends only,” so that only a user’s friends may see their posts.

Content-based . These users control their privacy by being careful about the information they post. If they knew that an employer could see their posts, then they would avoid posting when they were at work.

Profile-based . A less commonly used approach is to create multiple accounts (on a single platform or across platforms). For example, a professional, personal, and fun account.

As another example, teenagers often navigate public platforms by posting messages that parents or others would not understand their true meaning. For instance, by posting a song lyric or quote that is only recognized by specific individuals as a reference to a specific movie scene or ironic message, they therefore creatively limit their audience [ 49 , 70 ]. Others manage their audience by using more self-limiting privacy tactics like self-censorship [ 70 ], choosing just to not post something they were considering in the first place. These various tactics allow users to control who can see what on social media in different ways.

3.3 Reputation Management Through Self-Presentation

Technology-mediated interactions have led to new ways of managing how we present ourselves to different groups of friends (e.g., using different profiles on the same platform based on the audience) [ 71 ]. Being able to control the way we come across to others can be a challenging privacy problem that social media users must learn to navigate. Features to limit audience can also help with managing self-presentation. Nonetheless, reputation or impression management is not just about avoiding posts or limiting access to content. Posting more content, such as selfies, is another approach used to control the way others perceive a user [ 72 ]. In this case, it is important to present the content that helps convey a certain image of oneself. Research has revealed that those who engage more in impression management tend to have more online friends and disclose more personal information [ 73 ]. Those who feel online disclosures could leave them vulnerable to negativity, such as individuals who identify as LGBTQ+, have also been found to put an emphasis on impression management in order to navigate their online presence [ 74 ]. However, studies still show that users have anxieties around not having control over how they are presented [ 75 ]. Social media users worry not only about what they post, but are concerned about how others’ postings will reflect on them [ 42 ].

Another dimension that affects impression management attitudes is how social media platforms vary in their policies on whether user profiles must be consistent with their offline identities. Facebook’s real name policy, for instance, requires that people use their real name and represent themselves as one person, corresponding to their offline identities. Research confirms that online profiles actually do reflect users’ authentic personalities [ 76 ]. However, some platforms more easily facilitate identity exploration and have evolved norms encouraging it. For example, Finsta accounts popped up on Instagram a few years after the company started. These accounts are “Fake Instagram” accounts often sharing content that the user does not want to associate with their more public identity, allowing for more identity exploration. This may have arisen from the social norm that has evolved where Instagram users often feel like they need to present an ideal self. Scholars have observed such pressure on Instagram more than on other platforms like Snapchat [ 77 ]. While the ability to craft an online image separate from one’s offline identity may be more prevalent on platforms like Instagram, certain types of social media such as location-sharing social networks are deeply tied to one’s offline self, sharing actual physical location of its users. Users of Foursquare, a popular location-sharing app, have leveraged this tight coupling for impression management. Scholars have observed that users try to impress their friends or family members about the places they spend their time while skipping “check-in” at places like McDonald’s or work for fear of appearing boring or unimpressive [ 78 ].

Regardless of how tightly one’s online presence corresponds with their offline identity, concerns about self-presentation can arise. For example, users may lie about their location on location-sharing platforms as an impression management tactic and have concerns about harming their relationships with others [ 79 ]. On the other hand, Finstas are meant to help with self-presentation by hiding one’s true identity. Ironically, the content posted may be even more representative of the user’s attitudes and activities than the idealized images on one’s public-facing account. These contrasting examples illustrate how self-presentation concerns are complicated.

What further complicates reputation management is that social media content is shared and consumed by a group of people and not just individuals or dyads. Thus, self-presentation is not only controlled by the individual, but by others who might post pictures and/or tag that individual. Even when friends/followers do not directly post about the user, their actions can reflect on the user just by virtue of being connected with them. The issues of co-owned data and how to negotiate disclosure rules are a key area of privacy research on the rise. We refer you to Chap. 6 , which goes in-depth on this topic.

3.4 Access to Oneself

A final privacy challenge many social media users encounter is controlling accessibility others have to them. Some social media platforms automatically display when someone is online, which may invite interaction whether users want to be accessible or not. Controlling access to oneself is not as straightforward as limiting or blocking certain people’s access. For instance, studies have also shown that social pressures influence individuals to accept friend requests from “weak ties” as well as true friends [ 53 , 80 ]. As a result, the social dynamics on social media are becoming more complex, creating social anxiety and drama for many social media users [ 52 , 53 , 80 ]. Although a user may want to control who can interact with him or her, they may be worried about how using privacy features such as “blocking” other accounts may send the wrong signal to others and hurt their relationships [ 81 ]. In fact, an online social norm called “hyperfriending” [ 82 ] has developed where only 25% of a user’s online connections represent true friendship [ 83 ]. This may undermine the privacy individuals wished they had over who interacts with them on their various accounts. Due to social norms or etiquette, users may feel compelled to interact with others online [ 84 ]. Even if users do not feel like they need to interact, they can sometimes get annoyed or overwhelmed by seeing too much information from others [ 85 ]. Their mental state is being bombarded by an overload of information, and they may feel their attention is being captured.

Many social media sites now include location-sharing features to be able to tell people where they are by checking in to various locations, tag photos or posts, or even share location in real time. Therefore, privacy issues may also arise when sharing one’s location on social media and receiving undesirable attention. Studies point out user concerns about how others may use knowledge of that location to reach out and ask to meet up, or even to physically go find the person [ 86 ]. In fact, research has found that people may not be as concerned about the private nature of disclosing location as they are concerned for disturbing others or being disturbed oneself as a result of location sharing [ 87 ]. This makes sense given that analysis of mobile phone conversations reveals that describing one’s location plays a big role in signaling availability and creating social awareness [ 87 , 88 ].

Some scholars focus on the potential harm that may come because of sharing their location. Tsai et al. surveyed people about perceived risks and found that fear of potential stalkers is one of the biggest barriers to adopting location-sharing services [ 89 ]. Nevertheless, studies have also found that many individuals believe that the benefits of using location sharing outweigh the hypothetical costs. Foursquare users have expressed fears that strangers could use the application to stalk them [ 78 ]. These concerns may explain why users share their location more often with close relationships [ 37 ].

Geotagging is another area of privacy concern for online users. Geotagging is when media (photo, website, QR codes) contain metadata with geographical information. More often the information is longitudinal and latitudinal coordinates, but sometimes even time stamps are attached to photos people post. This poses a threat to individuals that post online without realizing that their photos can reveal sensitive information. For example, one study assessed Craigslist postings and demonstrated how they could extract location and hours a person would likely be home based on a photo the individual listed [ 90 ]. The study even pinpointed the exact home address of a celebrity TV host based on their posted Twitter photos. Researchers point out how many users are unaware that their physical safety is at risk when they post photos of themselves or indicate they are on vacation [ 22 , 90 , 91 ]. Doing so may make them easy targets for robbers or stalkers to know when and where to find them.

3.5 Privacy Paradox

While researchers have investigated these various privacy attitudes, perceptions, and behaviors, the privacy paradox (where behavior does not match with stated privacy concerns) has been especially salient on social media [ 92 , 93 , 94 , 95 , 96 , 97 ]. As a result, much research focuses on understanding the decision-making process behind self-disclosure [ 98 ]. Scholars that view disclosure as a result of weighing the costs and the benefits of disclosing information use the term “privacy calculus” to characterize this process [ 99 ]. Other research draws on the theory of bounded rationality to explain how people’s actions are not fully rational [ 100 ]. They are often guided by heuristic cues which do not necessarily lead them to make the best privacy decisions [ 101 ]. Indeed, a large body of literature has tried to dispel or explain the privacy paradox [ 94 , 102 , 103 ].

4 Reconceptualizing Social Media Privacy as Boundary Regulation

By reconceptualizing privacy in social media as a boundary regulation , we can see that the seeming paradox in privacy is actually a balance between being too open or disclosing too much and being too inaccessible or disclosing too little. The latter can result in social isolation which is privacy regulation gone wrong.

In the context of social media, there are five different types of privacy boundaries that should be considered.

People use various methods of coping with privacy violations , many not tied to disclosing less information.

Drawing from Altman’s theories of privacy in the offline world (see Chap. 2 ), Palen and Dourish describe how, just like in the real world, social media privacy is a boundary regulation process along various dimensions besides just disclosure [ 104 ]. Privacy can also involve regulating interactional boundaries with friends or followers online and the level of accessibility one desires to those people. For example, if a Facebook user wants to limit the people that can post on their wall, they can exclude certain people. Research has identified other threats to interpersonal boundary regulation that arise out of the unique nature of social media [ 42 ]. First, as mentioned previously, the threat to spatial boundaries occurs because our audiences are obscured so that we no longer have a good sense of whom we may be interacting with. Second, temporal boundaries are blurred because any interaction may now occur asynchronously at some time in the future due to the virtual persistence of data. Third, multiple interpersonal spaces are merging and overlapping in a way that has caused a “steady erosion of clearly situated action” [ 5 ]. Since each space may have different and, at times, mutually exclusive behavioral requirements, acting accordingly within those spaces has become more of a challenge to manage context collapses [ 42 ]. Along with these problems, a major interpersonal boundary regulation challenge is that social media environments often take control of boundary regulation away from the end users. For instance, Facebook’s popular “Timeline” automatically (based on an obscure algorithm) broadcasts an individual’s content and interactions to all of his or her friends [ 41 ]. Thus, Facebook users struggle to keep up to date on how to manage interactions within these spaces as Facebook, not the end user, controls what is shared with whom.

4.1 Boundary Regulation on Social Media

One conceptualization of privacy that has become popular in the recent literature is viewing privacy on social media as a form of interpersonal boundary regulation. These scholars have characterized privacy as finding the optimal or appropriate level of privacy rather than the act of withholding self-disclosures. That is, it is just as important to avoid over disclosing as it is to avoid under disclosing. Therefore, disclosure is considered a boundary that must be regulated so that it is not too much or too little. Petronio’s communication privacy management (CPM) theory emphasizes how disclosing information (see Chap. 2 ) is vital for building relationships, creating closeness, and creating intimacy [ 105 ]. Thus, social isolation and loneliness resulting from under disclosure can be outcomes of privacy regulation gone wrong just as much as social crowding can be an issue. Similarly, the framework of contextual integrity explains that context-relative informational norms define privacy expectations and appropriate information flows and so a disclosure in one context (such as your doctor asking you for your personal medical details) may be perfectly appropriate in that context but not in another (such as your employer asking you for your personal medical details) [ 106 ]. Here it is not just about an information disclosure boundary but about a relationship boundary where the appropriate disclosure depends on the relationship between the discloser and the recipient.

Drawing on Altman’s theory of boundary regulation, Wisniewski et al. created a useful taxonomy detailing the various types of privacy boundaries that are relevant for managing one’s privacy on social media [ 107 ]. They identified five distinct privacy boundaries relevant to social media:

Relationship . This involves regulating who is in one’s social network as well as appropriate interactions for each relationship type.

Network . This consists of regulating access to one’s social connections as well as interactions between those connections.

Territorial . This has to do with regulating what content comes in for personal consumption and what is available in interactional spaces.

Disclosure . The literature commonly focuses on this aspect which consists of regulating what personal and co-owned information is disclosed to one’s social network.

Interactional . This applies to regulating potential interaction with those within and outside of one’s social network.

Of these boundary types, Wisniewski et al. emphasize the most important is maintaining relationship boundaries between people. Similarly, Child and Petronio note that “one of the most obvious issues emerging from the impact of social network site use is the challenge of drawing boundary lines that denote where relationships begin and end” [ 108 ]. Making sure that social media facilitates behavior appropriate to each of the user’s relationships is a major challenge.

Each of these interpersonal boundaries can be further classified into regulation of more fine-grained dimensions. In Table 7.1 , we summarize the different ways that each of these five interpersonal boundaries can be regulated on social media.

Next, we describe each of these interpersonal boundaries in more detail.

Self- and Confidant Disclosures

The information disclosure concerns described in the previous “Privacy Challenges” section are the focus of privacy around disclosure boundaries. Posting norms on social media platforms often encourage the disclosure of one’s personal information (e.g., age, sexual orientation, location, personal images) [ 109 , 110 ]. Disclosing such information can leave one open to financial, personal, and professional risks such as identity theft [ 46 , 111 ]. However, there are motivations for disclosing personal information. For example, research suggests that posting behaviors on social media platforms have a significant relationship with a desire for positive self-presentation [ 112 , 113 ]. Privacy management is necessary for balancing the benefits of disclosure and its associated risks. This involves regulating both self-disclosure for information about one’s self and confidant-disclosure boundaries for information that is “co-owned” with others [ 105 ] (e.g., a photograph that includes other people, or information about oneself that is shared with another in confidence).

There are a variety of disclosure boundary regulation mechanisms on social media interfaces. Many platforms offer users the freedom to selectively share various types of information, create personal biographies, share links to their websites, or post their birthday. Self-disclosure can also be maintained through privacy settings such as granular control over who has access to specific posts. The level of information one wishes to disclose could be managed by various privacy settings. Many social media platforms encourage multiparty participation with features such as tagging, subtweeting, or replying to others’ posts. This level of engagement promotes the celebration of shared moments or co-owned information/content. At the same time, it increases possibilities for breaching confidentiality and can create unwanted situations such as posting congratulations to a pregnancy that has not yet been announced to most family members or friends. Some ways that people manage violations of disclosure boundaries are to reactively confront the violator in private or to stop using the platform after the unexpected disclosure [ 114 ].

Relationship Connection and Context

Relationship boundaries have to do with who the user accepts into his or her “friend group” and consequently shapes the nature of online interactions within a person’s social network. Social media platforms have embedded the idea of “friend-based privacy” where information and interactional access is primarily dependent on one’s connections. The structure of one’s network can affect the level of engagement and the types of disclosures made on a platform. Individuals with more open relationship boundaries may have higher instances of weak ties compared to others who may employ stricter rules for including people into their inner circles. For example, studies have found people who engage in “hyper-adding,” namely, adding a significant number of persons to their network which could result in a higher distribution of “weak ties” [ 53 , 82 ].

After users accept friends and make connections, they must manage overlapping contexts such as work, family, or acquaintances. This leads to the types of privacy issues discussed under “Context Collapse” in the previous “Privacy Challenges” section. Research shows that boundary violations are hardly remedied by blocking or unfriending unless in extreme cases [ 115 ]. Furthermore, users rarely organize their friends into groups (and some social media platforms do not offer that functionality) [ 114 ]. People are either unaware of the feature, think it takes too much time, or are concerned that the wrong person would still see their information. As a result, users often feel they have to sacrifice being authentic online to control their privacy.

Network Discovery and Interaction

An individual’s social media network is often public knowledge, and there are advantages and disadvantages of having friends being aware of one’s social connections (aka friends list or followers). Network boundary mechanisms enable people to identify groups of people and manage interactions between the various groups. We highlight two types of network boundaries, namely, network discovery and network intersection boundaries. First, network discovery boundaries are primarily centered around the act of regulating the type of access others have to one’s network connections. Implementing an open approach to network discovery boundaries may create problems that may arise including competition as competitors within the same industry could steal clients by carefully selecting from a publicly facing friend list. Another issue arises when a person’s friend does not have a good reputation and that connection is negatively received by others within that social group. Sometimes the result is positive, for example, when friends or family find they have mutual connections, thus building social capital. Some social media platforms offer the ability to hide friend groups from everyone.

Network intersection boundaries involve the regulation of the interactions among different friend groups within one’s social network. Social media users have expressed the benefits of engaging in discourse online with people who they may not personally know offline [ 116 ]. In contrast, clashes within one’s friend list due to opposing political views or personal stances could create tensions that would make moderating a post difficult. These boundaries could be harder to control and sometimes lead to conflict if one is forced to choose which friends can participate in discussions.

Inward- and Outward-Facing Territories

Territorial boundaries include “places and objects in the environment” to indicate “ownership, possession, and occasional active defense” [ 117 ]. Within social media, there are features that are either inward-facing territories or outward-facing territories. Inward-facing territories are commonly characterized as spaces where users could find updates on their friends and see the content their connections were posting (such as the “news feed” on Facebook or “updates” on LinkedIn). To control their inward-facing territories, individuals could hide posts from specific people, adjust their privacy settings, and use filters to find specific information.

These territories are constantly being updated with photos, videos, and news articles that are personalized and not public facing which contributes to an overall low priority for territorial management [ 114 ]. Most choose to ignore content that is irrelevant to them rather than employing privacy features. In addition, once privacy features are used to hide content from particular friends, users rarely revisit that decision to reconsider including content within that territory from that person.

It is important to note that the key characteristic of outward-facing territory management is the regulation of potentially unsatisfactory interactions rather than a fear of information exposure. One example of an outward-facing territory is Facebook’s wall/timeline, where a person’s friend may contribute to your social media presence. Outward-facing territories fall between a public and private place, which creates more risk of unintended boundary violations. Altman argues that “because of their semipublic quality [outward-facing territories] often have unclear rules regarding their use and are susceptible to encroachment by a variety of users, sometimes inappropriately and sometimes predisposing to social conflict” [ 117 ]. Similar to confidant disclosure described above, connections may post (unwanted) content on a user’s wall that could lead to turbulence if that content is later deleted.

Interactional Disabling and Blocking

Interactional boundaries limit the need for other boundary regulations discussed because a person reduces access to oneself by disabling features [ 114 ]. For example, a user may deactivate Facebook Messenger to avoid receiving messages but reactivate the app when they deem that interaction to be welcomed. In a similar regard, disabling semipublic features of the interface (such as the wall on Facebook) could assist users in having a greater sense of control. This manifestation of interaction withdrawal is typically not directed at reducing interaction with a specific person; rather, it may be motivated by a high desire to control one’s online spaces. As such, disabling features are associated with perceptions of mistrust within one’s network and a desire to limit interruptions [ 115 ]. On the more extreme end, blocking could also be employed to regulate interactional boundaries. Unlike other withdrawal mechanisms such as disabling your wall, picture tagging, or chat, blocking is inherently targeted. The act represents the rejection and revocation of access to oneself from a particular party. Some social media platforms allow users to block other people or pages, meaning that the blocked person may not contact or interact with the user in any form. Generally, blocking a person results from a negative experience such as stalking or being bombarded with unwanted content [ 118 ].

4.2 Coping with Social Media Privacy Violations

Overtime, many social media platforms have implemented new privacy features that attempt to address evolving privacy risks and users’ need for more granular control online. While this effort is commendable, Ellison et al. argue that “privacy behaviors on social networking sites are not limited to privacy settings” [ 41 ]. Thus, social media users still venture outside the realm of privacy settings to achieve appropriate levels of social interactions. Coping mechanisms can be viewed as behaviors utilized to maintain or regain interpersonal boundaries [ 107 ]. Although these coping approaches may often be suboptimal, Wisniewski et al.’s framework of coping strategies for maintaining one’s privacy provides insight into the struggles many social media users face in maintaining these boundaries.

This approach is often defined as the “reduction of intensity of inputs” [ 117 ]. Filtering includes selecting whom one will accept into their online social circle and is often used in the management of relational boundaries. Filtering techniques may include relying on social cues (e.g., viewing the profile picture or examining mutual friends) before confirming the addition of a new connection. Other methods leverage non-privacy-related features that are repurposed to manage interactions based on relation context, for example, creating multiple accounts on the same platform to separate professional connections from personal friends.

The vast amount of information on social media could easily become overwhelming and difficult to consume. Therefore, social media users may opt to ignore posts or skim through information to decide which ones should receive priority for engagement. Ignoring is most common for inward-facing territories such as your “Feed” page. The overreliance on this approach might increase the chances of missing critical moments that connections shared.

Blocking is a more extreme approach to interactional boundary management compared to filtering and ignoring, which contributes to lower levels of reported usage [ 119 ]. As an alternative, users have developed other technology-supported mechanisms that would allow them to avoid unwanted interactions. As an example, Wisniewski et al. describe using pseudonyms on Facebook to make it more difficult to find a user on the platform [ 107 ]. Another method for blocking unwanted interactions is to use the account of a close friend or loved one to enjoy the benefits of the content on the platform without the hassle of expected interactions. Page et al. highlight this type of secondary use for those who avoid social media because of social anxieties, harassment, and other social barriers [ 120 ].

When some users feel they are losing control, they withdraw from social media by doing one of the following: deleting their account, censoring their posts, or avoiding confrontation. As a result, a common technique is limiting or adjusting the information shared (even avoiding posts that may be received negatively) [ 121 ]. Das and Kramer found that “people with more boundaries to regulate censor more; people who exercise more control over their audience censor more content; and, users with more politically and age diverse friends censor less, in general” [ 122 ]. Withdrawal suggests that some users think the risks outweigh the benefits of social media.

Unlike offensive coping mechanisms such as filtering, blocking, or withdrawal, social media users resort to more defensive mechanisms when the intention is to create interactions that may be confrontational. Aggressive behavior is displayed when the goal is to seek revenge or garner attention from specific people or groups. Some users may choose to exploit subliminal references in their posts to indirectly address or offend specific persons (e.g., an ex-partner, coworker, family member).

Compliance is giving in to pressures (external or internal) and adjusting one’s interpersonal boundary preferences for others. Altman describes this as “repeated failures to achieve a balance between achieved and desired levels of privacy” [ 117 ]. Relinquishing one’s interactional privacy needs to accommodate pressures of disclosure, nondisclosure, or friending preferences could result in a perceived loss of control over social interactions.

A healthy strategy for managing social media boundary violations is communicating with the other person involved and finding a resolution. Prior work indicates that most users that compromise do so offline [ 107 ]. These compromises are mostly with closer friends who the user can contact through email, phone call, or messaging. These more private scenarios avoid other people becoming involved online. Also, many compromises are about tagging someone in photos or sharing personal information about another user (i.e., confidant disclosure).

In addition to this coping framework for social media privacy, Stutzman examined the creation of multiple profiles on social media websites, primarily Facebook, as an information regulation mechanism. Through grounded theory, he identified three types of information boundary regulation within this context (pseudonymity, practical obscurity, and transparent separations) and four overarching motives for these mechanisms (privacy, identity, utility, and propriety) [ 71 ]. Lampinen et al. created a framework of strategies for managing private versus public disclosures. It defined three dimensions by which strategies differed: behavioral vs. mental, individual vs. collaborative, and preventative vs. corrective [ 71 , 123 ]. The various coping frameworks conceptualize privacy as a process of interpersonal boundary regulation. However, they do not solve the problem of managing privacy on these platforms. They do attempt to model the complexity of privacy management in a way that better reflects the complex nature of interpersonal relationships rather than as a matter of withholding versus disclosing private information.

5 Addressing Privacy Challenges

Rather than just measuring privacy concerns, researchers and designers should focus on understanding attitudes towards boundary regulation. Validated tools for measuring boundary preservation concern and boundary enhancement expectations are provided in this chapter.

Privacy features need to be designed to account for individual differences in how they are perceived and used. While some feel features like untag, unfriend, and delete are useful, others are worried about how using such features will impact their relationships.

Unaddressed privacy concerns can serve as a barrier to using social media. It is crucial to design for not only functional privacy concerns (e.g., being overloaded by information, guarding from inappropriate data access) but social privacy concerns as well (e.g., unwelcome interactions, pressures surrounding appropriate self-presentation).

This section describes how to better identify privacy concerns by measuring them from a boundary regulation perspective. We also emphasize the importance of individual differences when designing privacy features. Finally, we elaborate on a crucial set of social privacy issues that we feel are a priority to address. While many social media users may feel these types of social pressures to some degree, these problems have pushed some of society’s most vulnerable to complete abandonment of social media despite their desire for social connection. We call on social media designers and researchers to focus on these problems which are a side effect of the technologies we have created.

5.1 Understanding People and Their Privacy Concerns

Understanding social media privacy as a boundary regulation allows us to better conceptualize people’s attitudes and behaviors. It helps us anticipate their concerns and balance between too little or too much privacy. However, many existing tools for measuring privacy come from the information privacy perspective [ 124 , 125 , 126 ] and focus on data collection by organizations, errors, secondary use, or technical control of data. In detailing the various types of privacy boundaries that are relevant for managing one’s privacy on social media, Wisniewski et al. [ 114 ] emphasized that the most important is maintaining relationship boundaries between people.

Page et al. [ 86 , 127 ] similarly found that concerns about damaging relationship boundaries are actually at the root of low-level privacy concerns such as worrying about who sees what, being too accessible, or being bothered or bothering others by sharing too much information. For instance, a typically cited privacy concern such as being worried about a stranger knowing one’s current location turns out to be a privacy concern only if an individual expects that a stranger might violate typical relationship expectations. Their research revealed that many people were unconcerned about strangers knowing their location and explained that no one would care enough to use that information to come find them. They did not expect anyone to violate relationship boundaries and so were privacy unconcerned. On the other hand, those who felt there was a likelihood of someone using their location for nefarious purposes were privacy concerned. Social media enabling a negative change in relationship boundaries and the types of interactions that are now possible (such as strangers now being able to locate me) drives privacy concerns.

In fact, while scholars have used many lower-level privacy concerns such as being worried about sharing information to predict social media usage and adoption, they have met with mixed success leading to the commonly observed privacy paradox. However, research shows that preserving one’s relationship boundaries is at the root of these low-level online privacy concerns (e.g., informational, psychological, interactional, and physical privacy concerns) and is a significant predictor of social media usage [ 86 , 127 ]. In other words, concerns about social media damaging one’s relationships (aka relationship boundary regulation) are what drives privacy concerns.

5.2 Measuring Privacy Concerns

Boundary regulation plays a key role in maintaining the right level of privacy on social media, but how do we evaluate whether a platform is adequately supporting it? A popular scale for testing users’ awareness of secondary access is the Internet Users’ Information Privacy Concerns (IUIPC) scale, which measures their perceptions of collection, control, and awareness of user data [ 125 ]. An important finding is that users “want to know and have control over their information stored in marketers’ databases.” This indicates that social media should be designed such that people know where their data goes. However, throughout this chapter, it is evident that research on social media privacy has found concerns about social privacy more salient. In fact, the focus on relationship boundaries is a key privacy boundary to consider and measure in evaluating privacy concerns. Thus, having a scale to measure relationship boundary regulation would allow researchers and designers to better evaluate social media privacy.

Here we present validated relationship boundary regulation survey items developed by Page et al. which predict adoption and usage for various social media including Facebook, Twitter, LinkedIn, Instagram, and location-sharing social media [ 127 , 128 ]. These survey items can be used to evaluate privacy concerns for use of existing social media platforms, as well as capturing attitudes about new features or platforms. The survey items capture attitudes about one’s ability to regulate relationship boundaries when using a social media platform and are administered with a 7-point Likert scale (−3 = Disagree Completely, −2 = Disagree Mostly, −1 Disagree Slightly, 0 = Neither agree nor disagree, 1 = Agree Slightly, 2 = Agree Mostly, 3 = Agree Completely). These items measure both concerns and positive expectations.

When evaluating a new or existing social media platform, the relationship boundary preservation concern (BPC) items can be used to gauge user’s concerns about harming their relationships. A higher score would indicate that more support for privacy management is needed on a given platform. The relationship boundary enhancement expectation (BEE) items can also be used to evaluate whether users expect that using the platform will improve the user’s relationships. A high score is important to driving adoption and usage – having low concerns alone is not enough to drive usage. Along similar lines, even if users have high concerns, they may be counteracted by a perceived high level of benefits and so users remain frequent users of a platform. For instance, Facebook, one of the most widely used platforms, was shown to both invoke high levels of concern as well as high levels of enhancement expectation [ 127 ]. However, note that high frequency of use does not necessarily mean high levels of engagement (e.g., posting, commenting) or that users do not employ suboptimal workarounds (e.g., being vague in their posts) [ 81 ]. On the other hand, Twitter has a higher level of concerns compared to perceived enhancement and, accordingly, lower levels of usage [ 127 ].

In the validation studies, the set of survey items representing BPC were treated as a scale and factor analysis used to compute a single score. Similarly, the ones representing BEE were used to generate a single factor score to represent that construct. These could be used to evaluate new features or platforms in the lab or after deployment. For instance, after performing tasks on a new feature or platform, the user can answer these questions and the designer can compare the responses between different designs in A/B testing, or to predict usage frequency and adoption intentions (e.g., see [ 127 , 129 ] for detailed examples). Moreover, by correlating BPC or BEE with demographics or other customer segmentations (e.g., age, whether they are new customers, purpose for using the platform), product designers may be able to identify attitudes that are connected with certain segments of their customer base and address it directly.

5.3 Designing Privacy Features

When designing for privacy features, a crucial aspect to consider is individual differences. Privacy is not one-size-fits-all: there are many variations in how people feel, what they expect, and how they behave. Because social media connects individuals with diverse needs and expectations, and from a myriad of contexts, a necessity in addressing social media privacy is understanding individual differences in privacy attitudes and behaviors. Many individual differences have been identified that shape privacy needs and preferences [ 15 ] and behaviors [ 6 , 24 , 99 ].

Scholars have established that privacy as a construct is not limited to informational privacy (i.e., understanding the flow of data) but also includes social privacy concerns that may be more interactional (e.g., accessibility) or psychological in nature (e.g., self-presentation) [ 111 , 130 ]. Thus, a host of attitudes and experiences could shape an individual’s view on what it means to have privacy online. For example, people’s preferences for privacy tools could be heavily influenced by the type of data being shared or the recipient of that data [ 36 , 131 , 132 ]. Likewise, prior experiences (negative or positive) could shape how people interact online which could affect disclosure [ 133 ]. Context and relevance have also been found to significantly influence privacy behavior online. Drawing from the contextual integrity framework, many researchers argue that when people perceive data collection to be reasonable or appropriate, they are more likely to share information [ 134 ]. On the other hand, research has shown that when faced with uncomfortable scenarios, people employ privacy protective behaviors such as nondisclosure or falsifying information [ 135 ]. Research has also pointed to personal characteristics that could shape digital privacy behavior such as personality, culture, gender, age, and social norms [ 64 , 106 , 136 , 137 , 138 , 139 , 140 ].

While identifying concerns about damaging one’s relationships is important to measure, understanding the individual differences that can lead someone to be concerned can provide insight into addressing these concerns. For instance, through a series of investigations, Page et al. uncovered a communication style that predicts concerns about preserving relationship boundaries on many different social media platforms [ 127 , 128 , 129 ]. This communication style is characterized by wanting to put information out there so that the individual does not need to proactively inform others. Those who prefer an FYI (For Your Information) communication style are less concerned about relationship boundary preservation and, as a result, exhibit higher levels of engagement, interactions, and use of social media than low FYI communicators. For example, the survey items that capture an FYI communication style preference for location-sharing social media are: “I want the people I know to be aware of my location, without having to bother to tell them,” “I would prefer to make my location available to the people I know, so that they can see it whenever they need it,” and “The people I know should be able to get my location whenever they feel they need it.” Each item is administered with a 7-point Likert scale (Disagree strongly, Disagree moderately, Disagree slightly, Neutral, Agree slightly, Agree moderately, Agree strongly). For other social media platforms, the information type is adjusted (i.e., “what I’m up to” instead of “my location”).

Consequently, this raises concern over implications for non-FYI communicators since the design of major social media platforms is catered to FYI communicators [ 127 , 128 ]. Drawing on this insight, Page demonstrated how considering the user’s communication style when designing location-sharing social media interfaces can alleviate boundary preservation concerns [ 129 ]. Certain design choices such as choosing a request-based location-sharing interaction can lower concerns for non-FYI communicators, while continuous location-sharing and check-in type interactions that are typical in social media may be fine for FYI communicators.

This demonstrates that researchers should consider in the design of social media individual differences that affect privacy attitudes. Another individual difference in attitudes towards privacy features is a user’s apprehension that using common features such as untag, delete, or unfriend/unfollow can act as a hindrance in their relationships with others. Page et al. identified that while many use privacy features and perceive them as a tool useful for protecting their privacy, there are also many who are concerned about how using privacy features could hurt their relationships with others (e.g., being worried about offending others by untagging or unfriending) [ 81 ]. Instead, those individuals would use alternative privacy management tactics such as vaguebooking (not sharing specific details and using vague posts). Designers need to be aware that privacy features also need to be catered to individual variations in attitudes as well or else they may be ineffective and unused by certain segments of the user population.

5.4 Privacy Concerns and Social Disenfranchisement

A significant amount of research within the domain of social media nonuse has been focused on functional barriers that hinder adoption. In many cases, nonuse is traced to a lack of access (e.g., limited access to technology, financial resources, or the Internet). However, the push against adoption and subsequent usage can be voluntary [ 141 ] due to functional privacy concerns such as concerns about data breaches, information overload, or annoying posts [ 120 ]. Several social media companies have also implemented features such as time limits to help users counter overuse [ 142 ].

Likewise, it is equally important to consider social barriers that prevent social media engagement for people who really could use the social connection. Sharing about distressing experiences can be beneficial and reduce stigma, improve connection and interpersonal relationships with one’s network, and enhance well-being [ 6 , 7 , 143 , 144 ]. However, Page et al. identified a class of barriers that highlight social privacy concerns rooted in social anxiety or concerns about being overly influenced by others on social media. This is in contrast to the prior school of thought that focused primarily on functional motivations as barriers that influence nonuse (see Fig. 7.1 ) [ 120 ]. They point out that many who are already vulnerable avoid social media due to social barriers such as online harassment or paralysis over making decisions pertaining to online social interactions. Yet, they are also the ones who could benefit greatly from social connection and who end up losing touch with friends and social support by being off social media. They term this lose-lose situation of negative social consequences that arise when using social media as well as consequences from not using it, social disenfranchisement . They call on designers to address such social barriers and to realize that in designing the user experience to connect users so well, they are implicitly designing the nonuser experience of being left out. Given that social media usage may not always be a viable option, designers should design to alleviate the negative consequences of nonuse.

figure 1

Extension of Wyatt’s frame that divided nonusers along the dimensions of whether someone has used the technology in the past and the motivation for adoption (extrinsic, e.g., organizationally imposed, versus intrinsic, e.g., desire to communicate through technology). Page et al. differentiate between functional motivations/barriers of use (which has been the focus of much research) versus social motivations/barriers to use. Other frameworks consider additional temporal states of adoption (whether they are currently using and whether they will in the future). See [ 120 ] for more detailed descriptions

5.5 Guidelines for Designing Privacy-Sensitive Social Media

Now that you have learned about various privacy problems related to social media use, how do you apply that to designing or studying social media? Here are some practical guidelines.

Identifying Privacy Attitudes

Measuring privacy attitudes is a tricky task. Using existing informational privacy scales, users often say they are concerned, but this does not end up matching their actual behavior. By approaching it from a boundary regulation perspective, it will be easier to identify the proper balance between sharing too much and sharing too little. The survey items described in this chapter offer a way to measure concerns about boundary regulation as well as positive expectations. Considering both are key to more accurately predicting user behaviors.

Understanding Your Target Population

Some key characteristics are described in this chapter. Identifying these in your target population can help you be aware of individual differences that might affect privacy preferences on social media. When you are measuring privacy concerns, matching the preferences of your audience makes it more likely that they will have a good user experience. Pay particular attention to traits that have been identified as being related to usage and adoption of social media platforms, such as the FYI communication style which can be measured using the survey items provided in this chapter.

Evaluating Privacy Features

Focus on understanding whether users perceive your privacy features as useful or perhaps as posing a relational hindrance. The survey items provided in this chapter can help you do so. When anticipating privacy needs of your social media users, make sure you identify features that may impact boundary regulation both positively and negatively. You can compare attitudes between the existing feature and the newer version of the feature that will/has been deployed. You can also correlate attitudes towards privacy features with individual characteristics – some subpopulation of users may see privacy features as useful, while others may consider them a relational hindrance.

6 Chapter Summary

Social media has been widely adopted and quickly become an integral part of social, personal, economic, political, professional, and instrumental welfare. Understanding how mediated social interactions change the assumptions around audience management, disclosure, and self-presentation is key to working towards reconciling offline privacy assumptions with new realities. Moreover, given the rapidly changing landscape of widely available social media platforms, researchers and designers need to continually re-evaluate the privacy implications of new services, features, and interaction modalities.

With the rise of networked individualism, an especially strong emphasis must be placed on understanding individual characteristics and traits that can shape a user’s privacy expectations and needs. Given the inherently social nature of social media, understanding social norms and the influence of larger cultural and structural factors is also important for interpreting expectations of privacy and the significance around various social media behaviors.

Privacy does not have a one-size-fits-all solution. It is a normative construct that is context dependent and can change over time, from culture to culture, and person to person. It needs to be weighed across different individuals and against other important goals and values of the larger group or society. Because people and their social interactions can be complex, designing for social media privacy is usually not a straightforward task. However, the consequences of not addressing privacy issues can range from irritating to devastating. Using this chapter as a guide and taking the steps to think through privacy needs and expectations of your social media users is an integral part of designing for social media.

Quan-Haase, Anabel, and Alyson L. Young. 2010. Uses and gratifications of social media: A comparison of Facebook and instant messaging. Bulletin of Science, Technology & Society 30 (5): 350–361.

Article   Google Scholar  

Gruzd, Anatoliy, Drew Paulin, and Caroline Haythornthwaite. 2016. Analyzing social media and learning through content and social network analysis: A faceted methodological approach. Journal of Learning Analytics 3 (3): 46–71.

Yang, Huining. 2020. Secondary-school Students’ Perspectives of Utilizing Tik Tok for English learning in and beyond the EFL classroom. In 2020 3rd International Conference on Education Technology and Social Science (ETSS 2020) , 163–183.

Google Scholar  

Van Dijck, José. 2012. Facebook as a tool for producing sociality and connectivity. Television & New Media 13 (2): 160–176.

Grudin, Jonathan. 2001. Desituating action: Digital representation of context. Human–Computer Interaction 16 (2–4): 269–286.

Andalibi, Nazanin, Oliver L. Haimson, Munmun De Choudhury, and Andrea Forte. 2016. Understanding social media disclosures of sexual abuse through the lenses of support seeking and anonymity. In Proceedings of the 2016 CHI conference on human factors in computing systems , 3906–3918.

Andalibi, Nazanin, Pinar Ozturk, and Andrea Forte. 2017. Sensitive self-disclosures, responses, and social support on Instagram: The case of #depression. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing , 1485–1500.

Lin, Han, William Tov, and Qiu Lin. 2014. Emotional disclosure on social networking sites: The role of network structure and psychological needs. Computers in Human Behavior 41: 342–350.

Burke, Moira, Cameron Marlow, and Thomas Lento. 2010. Social network activity and social well-being. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems , ACM, 1909–1912.

Ellison, Nicole B., Charles Steinfield, and Cliff Lampe. 2007. The benefits of Facebook “Friends:” Social capital and college students’ use of online social network sites. Journal of Computer-Mediated Communication 12 (4): 1143–1168.

———. 2011. Connection strategies: social capital implications of Facebook-enabled communication practices. New Media & Society 13 (6): 873–892.

Koroleva, Ksenia, Hanna Krasnova, Natasha Veltri, and Oliver Günther. 2011. It’s all about networking! Empirical investigation of social capital formation on social network sites. In ICIS 2011 Proceedings .

Fischer-Hübner, Simone, Julio Angulo, Farzaneh Karegar, and Tobias Pulls. 2016. Transparency, privacy and trust–technology for tracking and controlling my data disclosures: Does this work? In IFIP International Conference on Trust Management , Springer, 3–14.

Xu, Heng, Hock-Hai Teo, Bernard C.Y. Tan, and Ritu Agarwal. 2012. Research note-effects of individual self-protection, industry self-regulation, and government regulation on privacy concerns: A study of location-based services. Information Systems Research 23 (4): 1342–1363.

Boyd, Danah. 2002. Faceted Id/Entity: Managing Representation in a Digital World . Retrieved August 14, 2020 from https://dspace.mit.edu/handle/1721.1/39401 .

Boyd, Danah M., and Nicole B. Ellison. 2007. Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication 13 (1): 210–230.

Dwyer, C., S.R. Hiltz, M.S. Poole, et al. 2010. Developing reliable measures of privacy management within social networking sites. In System Sciences (HICSS), 2010 43rd Hawaii International Conference on , 1–10.

Hargittai, E. 2007. Whose space? Differences among users and non-users of social network sites. Journal of Computer-Mediated Communication 13: 1.

Tufekci, Zeynep. 2008. Grooming, Gossip, Facebook and Myspace. Information, Communication & Society 11 (4): 544–564.

Kane, Gerald C., Maryam Alavi, Giuseppe Joe Labianca, and Stephen P. Borgatti. 2014. What’s different about social media networks? A framework and research agenda. MIS Quarterly 38 (1): 275–304.

Pew Research Center. 2019. Social Media Fact Sheet . Pew Research Center: Internet, Science & Technology. Retrieved November 27, 2020 from https://www.pewresearch.org/internet/fact-sheet/social-media/ .

Fire, M., R. Goldschmidt, and Y. Elovici. 2014. Online social networks: Threats and solutions. IEEE Communications Surveys Tutorials 16 (4): 2019–2036.

Social Media Users. DataReportal – Global Digital Insights . Retrieved March 16, 2021 from https://datareportal.com/social-media-users .

Alalwan, Ali Abdallah, Nripendra P. Rana, Yogesh K. Dwivedi, and Raed Algharabat. 2017. Social media in marketing: A review and analysis of the existing literature. Telematics and Informatics 34 (7): 1177–1190.

Binns, Reuben, Jun Zhao, Max Van Kleek, and Nigel Shadbolt. 2018. Measuring third-party tracker power across web and mobile. ACM Transactions on Internet Technology 18 (4): 52:1–52:22.

Barnard, Lisa. 2014. The cost of creepiness: How online behavioral advertising affects consumer purchase intention.

Dolin, Claire, Ben Weinshel, Shawn Shan, et al. 2018. Unpacking perceptions of data-driven inferences underlying online targeting and personalization. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems , ACM, 493.

Ur, Blase, Pedro Giovanni Leon, Lorrie Faith Cranor, Richard Shay, and Yang Wang. 2012. Smart, useful, scary, creepy: Perceptions of online behavioral advertising. In Proceedings of the eighth symposium on usable privacy and security , ACM, 4.

Dogruel, Leyla. 2019. Too much information!? Examining the impact of different levels of transparency on consumers’ evaluations of targeted advertising. Communication Research Reports 36 (5): 383–392.

Hamilton, Isobel Asher, and Dean Grace. Signal downloads skyrocketed 4,200% after WhatsApp announced it would force users to share personal data with Facebook. It’s top of both Google and Apple’s app stores. Business Insider . Retrieved February 1, 2021 from https://www.businessinsider.com/whatsapp-facebook-data-signal-download-telegram-encrypted-messaging-2021-1 .

Wilkinson, Daricia, Moses Namara, Karishma Patil, Lijie Guo, Apoorva Manda, and Bart Knijnenburg. 2021. The Pursuit of Transparency and Control: A Classification of Ad Explanations in Social Media .

Lee, Rainie, and Barry Wellman. 2012. Networked . Cambridge, MA: MIT Press.

Dunbar, Robin. 2011. How many" friends" can you really have? IEEE Spectrum 48 (6): 81–83.

Carr, Caleb T., and Rebecca A. Hayes. 2015. Social media: Defining, developing, and divining. Atlantic Journal of Communication 23 (1): 46–65.

Xu, Heng, Tamara Dinev, H. Smith, and Paul Hart. 2008. Examining the Formation of Individual’s Privacy Concerns: Toward an Integrative View.

Consolvo, Sunny, Ian E Smith, Tara Matthews, Anthony LaMarca, Jason Tabert, and Pauline Powledge. 2005. Location disclosure to social relations: Why, when, & what people want to share. 10.

Wiese, Jason, Patrick Gage Kelley, Lorrie Faith Cranor, Laura Dabbish, Jason I. Hong, and John Zimmerman. 2011. Are you close with me? Are you nearby?: Investigating social groups, closeness, and willingness to share. UbiComp 10.

Xu, Heng, and Sumeet Gupta. 2009. The effects of privacy concerns and personal innovativeness on potential and experienced customers’ adoption of location-based services. Electronic Markets 19 (2–3): 137–149.

Acquisti, A., and R. Gross. 2006. Imagined communities: Awareness, information sharing, and privacy on the Facebook. Privacy Enhancing Technologies : 36–58.

Debatin, Bernhard, Jennette P. Lovejoy, Ann-Kathrin Horn, and Brittany N. Hughes. 2009. Facebook and online privacy: Attitudes, behaviors, and unintended consequences. Journal of Computer-Mediated Communication 15 (1): 83–108.

Ellison, Nicole B., Jessica Vitak, Charles Steinfield, Rebecca Gray, and Cliff Lampe. 2011. Negotiating privacy concerns and social capital needs in a social media environment. In Privacy Online: Perspectives on Privacy and Self-Disclosure in the Social Web , ed. S. Trepte and L. Reinecke, 19–32. Berlin: Springer.

Chapter   Google Scholar  

Tufekci, Z. 2008. Can You See Me Now? Audience and Disclosure Regulation in Online Social Network Sites . Retrieved January 29, 2021 from https://journals.sagepub.com/doi/abs/10.1177/0270467607311484 .

Ayalon, Oshrat and Eran Toch. 2013. Retrospective privacy: Managing longitudinal privacy in online social networks. In Proceedings of the Ninth Symposium on Usable Privacy and Security – SOUPS ’13 , ACM Press, 1.

Meeder, Brendan, Jennifer Tam, Patrick Gage Kelley, and Lorrie Faith Cranor. 2010. RT @IWantPrivacy: Widespread Violation of Privacy Settings in the Twitter Social Network . 12.

Padyab, Ali, and Tero Pã. Facebook Users Attitudes towards Secondary Use of Personal Information . 20.

van der Schyff, Karl, Stephen Flowerday, and Steven Furnell. 2020. Duplicitous social media and data surveillance: An evaluation of privacy risk. Computers & Security 94: 101822.

Symeonidis, Iraklis, Gergely Biczók, Fatemeh Shirazi, Cristina Pérez-Solà, Jessica Schroers, and Bart Preneel. 2018. Collateral damage of Facebook third-party applications: A comprehensive study. Computers & Security 77: 179–208.

Binder, Jens, Andrew Howes, and Alistair Sutcliffe. 2009. The problem of conflicting social spheres: Effects of network structure on experienced tension in social network sites. In Proceedings of the 27th international conference on Human factors in computing systems – CHI 09 , ACM Press, 965.

Marwick, Alice E., and Danah Boyd. 2011. I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media & Society 13 (1): 114–133.

Sibona, Christopher. 2014. Unfriending on Facebook: Context collapse and unfriending behaviors. In 2014 47th Hawaii International Conference on System Sciences , 1676–1685.

Boyd, Danah Michele. 2004. Friendster and publicly articulated social networking. In Extended Abstracts of the 2004 Conference on Human Factors and Computing Systems – CHI ’04 , ACM Press, 1279.

Brzozowski, Michael J., Tad Hogg, and Gabor Szabo. 2008. Friends and foes: Ideological social networking. In Proceeding of the Twenty-Sixth Annual CHI Conference on Human Factors in Computing Systems – CHI ’08 , ACM Press, 817.

Boyd, Danah. 2006. Friends, Friendsters, and MySpace Top 8: Writing community into being on social network sites. First Monday .

Vitak, Jessica, Cliff Lampe, Rebecca Gray, and Nicole B Ellison. “Why won’t you be my Facebook friend?”: Strategies for Managing Context Collapse in the Workplace . 3.

Dennen, Vanessa P., Stacey A. Rutledge, Lauren M. Bagdy, Jerrica T. Rowlett, Shannon Burnick, and Sarah Joyce. 2017. Context collapse and student social media networks: Where life and high school collide. In Proceedings of the 8th International Conference on Social Media & Society - #SMSociety17 , ACM Press, 1–5.

Pike, Jacqueline C., Patrick J. Bateman, and Brian S. Butler. 2018. Information from social networking sites: Context collapse and ambiguity in the hiring process. Information Systems Journal 28 (4): 729–758.

Heussner, Ki Mae and Dalia Fahmy. Teacher loses job after commenting about students, parents on Facebook. ABC News . Retrieved November 19, 2020 from https://abcnews.go.com/Technology/facebook-firing-teacher-loses-job-commenting-students-parents/story?id=11437248 .

Torba, Andrew. 2019. High school teacher fired for tweets criticizing illegal immigration. Gab News . Retrieved November 19, 2020 from https://news.gab.com/2019/09/16/high-school-teacher-fired-for-tweets-criticizing-illegal-immigration/ .

Hall, Gaynor, and Courtney Gousman. 2020. Suburban teacher’s social media post sparks outrage, internal investigation | WGN-TV. WGNTV . Retrieved November 19, 2020 from https://wgntv.com/news/chicago-news/suburban-teachers-social-media-post-sparks-outrage-internal-investigation/ .

Davis, Jenny L., and Nathan Jurgenson. 2014. Context collapse: Theorizing context collusions and collisions. Information, Communication & Society 17 (4): 476–485.

Kaul, Asha, and Vidhi Chaudhri. 2018. Do celebrities have it all? Context collapse and the networked publics. Journal of Human Values 24 (1): 1–10.

Donnelly, Erin. 2019. Kim Kardashian mom-shamed over photo of North staring at a phone: “Give her a book.” Yahoo! Entertainment . Retrieved April 11, 2021 from https://www.yahoo.com/entertainment/kim-kardashian-mom-shamed-north-west-phone-book-151126429.html .

Sutton, Jeannette, Leysia Palen, and Irina Shklovski. 2008. Backchannels on the Front Lines: Emergent Uses of Social Media in the 2007 Southern California Wildfires . 9.

Litt, Eden. 2012. Knock, knock. Who’s there? The imagined audience. Journal of Broadcasting & Electronic Media 56 (3): 330–345.

Litt, Eden, and Eszter Hargittai. 2016. The imagined audience on social network sites. Social Media + Society 2 (1): 2056305116633482.

Li, N., and G. Chen. 2010. Sharing location in online social networks. IEEE Network 24 (5): 20–25.

Stutzman, Fred, and Jacob Kramer-Duffield. 2010. Friends only: Examining a privacy-enhancing behavior in Facebook. In Proceedings of the 28th international conference on Human factors in computing systems – CHI ’10 , ACM Press, 1553.

Jung, Yumi, and Emilee Rader. 2016. The imagined audience and privacy concern on Facebook: Differences between producers and consumers. Social Media + Society 2 (2): 2056305116644615.

Vitak, Jessica. 2015. Balancing Audience and Privacy Tensions on Social Network Sites . 20.

Oolo, Egle, and Andra Siibak. 2013. Performing for one’s imagined audience: Social steganography and other privacy strategies of Estonian teens on networked publics. Institute of Journalism and Communication, University of Tartu, Tartu, Estonia 7: 1.

Stutzman, Fred, and Woodrow Hartzog. 2012. Boundary Regulation in Social Media . 10.

Pounders, Kathrynn, Christine M. Kowalczyk, and Kirsten Stowers. 2016. Insight into the motivation of selfie postings: Impression management and self-esteem. European Journal of Marketing 50 (9/10): 1879–1892.

Krämer, Nicole C., and Stephan Winter. 2008. Impression Management 2.0: The relationship of self-esteem, extraversion, self-efficacy, and self-presentation within social networking sites. Journal of Media Psychology 20 (3): 106–116.

Duguay, Stefanie. 2016. “He has a way gayer Facebook than I do”: Investigating sexual identity disclosure and context collapse on a social networking site. New Media & Society 18 (6): 891–907.

Tang, Karen P., Jialiu Lin, Jason I. Hong, Daniel P. Siewiorek, and Norman Sadeh. 2010. Rethinking location sharing: Exploring the implications of social-driven vs. purpose-driven location sharing. In Proceedings of the 12th ACM International Conference on Ubiquitous Computing , ACM, 85–94.

Back, Mitja D., Juliane M. Stopfer, Simine Vazire, et al. 2010. Facebook profiles reflect actual personality, not self-idealization. Psychological Science 21 (3): 372–374.

Choi, Tae Rang, and Yongjun Sung. 2018. Instagram versus Snapchat: Self-expression and privacy concern on social media. Telematics and Informatics 35 (8): 2289–2298.

Lindqvist, Janne, Justin Cranshaw, Jason Wiese, Jason Hong, and John Zimmerman. 2011. I’m the mayor of my house: Examining why people use foursquare – a social-driven location sharing application. In Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems – CHI ’11 , ACM Press, 2409.

Page, Xinru, Bart P. Knijnenburg, and Alfred Kobsa. 2013. What a tangled web we weave: Lying backfires in location-sharing social media. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work – CSCW ’13 , ACM Press, 273.

Hogg, Tad, and D Wilkinson. 2008. Multiple Relationship Types in Online Communities and Social Networks . 6.

Page, Xinru, Reza Ghaiumy Anaraky, Bart P. Knijnenburg, and Pamela J. Wisniewski. 2019. Pragmatic tool vs. relational hindrance: Exploring why some social media users avoid privacy features. In Proceedings of the ACM on Human-Computer Interaction 3, CSCW: 1–23.

Fono, D., and K. Raynes-Goldie. 2006. Hyperfriends and beyond: Friendship and social norms on Live Journal. Internet Research Annual .

Zinoviev, Dmitry, and Vy Duong. 2009. Toward understanding friendship in online social networks. arXiv:0902.4658 [cs] .

Smith, Hilary, Yvonne Rogers, and Mark Brady. 2003. Managing one’s social network: Does age make a difference. In Proceedings of the Interact 2003, IOS Press, 551–558.

Ehrlich, Kate, and N. Shami. 2010. Microblogging inside and outside the workplace. Proceedings of the International AAAI Conference on Web and Social Media 4: 1.

Page, Xinru, Alfred Kobsa, and Bart P. Knijnenburg. 2012. Don’t disturb my circles! Boundary preservation is at the center of location-sharing concerns. In Proceedings of the Sixth International AAAI Conference on Weblogs and Social Media , 266–273.

Iachello, Giovanni, and Jason Hong. 2007. End-user privacy in human-computer interaction. Foundations and Trends in Human-Computer Interaction 1 (1): 1–137.

Article   MATH   Google Scholar  

Bentley, Frank R., and Crysta J. Metcalf. 2008. Location and activity sharing in everyday mobile communication. In Proceeding of the Twenty-Sixth Annual CHI Conference Extended Abstracts on Human Factors in Computing Systems – CHI ’08 , ACM Press, 2453.

Tsai, Janice Y., Patrick Gage Kelley, Lorrie Faith Cranor, and Norman Sadeh. Location-Sharing Technologies: Privacy Risks and Controls . 34.

Friedland, Gerald, and Robin Sommer. 2010. Cybercasing the Joint: On the Privacy Implications of Geo-Tagging . 6.

Stefanidis, Anthony, Andrew Crooks, and Jacek Radzikowski. 2011. Harvesting ambient geospatial information from social media feeds.

Awad, Naveen Farag, and M.S. Krishnan. 2006. The personalization privacy paradox: An empirical evaluation of information transparency and the willingness to be profiled online for personalization. MIS Quarterly 30 (1): 13–28.

Chen, Xi, and Shuo Shi. 2009. A literature review of privacy research on social network sites. In 2009 International Conference on Multimedia Information Networking and Security , IEEE, 93–97.

Gerber, Nina, Paul Gerber, and Melanie Volkamer. 2018. Explaining the privacy paradox: A systematic review of literature investigating privacy attitude and behavior. Computers & Security 77: 226–261.

Houghton, David J., and Adam N. Joinson. 2010. Privacy, social network sites, and social relations. Journal of Technology in Human Services 28 (1–2): 74–94.

Pavlou, Paul A. 2011. State of the information privacy literature: Where are we now and where should we go. MIS Quarterly 35 (4): 977–988.

Xu, Feng, Katina Michael, and Xi Chen. 2013. Factors affecting privacy disclosure on social network sites: An integrated model. Electronic Commerce Research 13 (2): 151–168.

Xu, Heng, Rachida Parks, Chao-Hsien Chu, and Xiaolong Luke Zhang. 2010. Information disclosure and online social networks: From the case of Facebook news feed controversy to a theoretical understanding. AMCIS , Citeseer, 503.

Dinev, Tamara, Massimo Bellotto, Paul Hart, Vincenzo Russo, Ilaria Serra, and Christian Colautti. 2006. Privacy calculus model in e-commerce – a study of Italy and the United States. European Journal of Information Systems 15 (4): 389–402.

Selten, Reinhard. 1990. Bounded rationality. Journal of Institutional and Theoretical Economics (JITE)/Zeitschrift für die gesamte Staatswissenschaft 146 (4): 649–658.

Knijnenburg, Bart P., Elaine M. Raybourn, David Cherry, Daricia Wilkinson, Saadhika Sivakumar, and Henry Sloan. 2017. Death to the privacy calculus? In Proceedings of the 2017 Networked Privacy Workshop at CSCW , Social Science Research Network.

Dienlin, Tobias, and Sabine Trepte. Is the privacy paradox a relic of the past? An in-depth analysis of privacy attitudes and privacy behaviors. European Journal of Social Psychology 45 (3): 285–297.

Kokolakis, Spyros. 2017. Privacy attitudes and privacy behaviour: A review of current research on the privacy paradox phenomenon. Computers & Security 64: 122–134.

Palen, Leysia, and Paul Dourish. 2003. Unpacking “Privacy” for a networked world. NEW HORIZONS 5: 8.

Petronio, Sandra. 1991. Communication boundary management: A theoretical model of managing disclosure of private information between marital couples. Communication Theory 1 (4): 311–335.

Nissenbaum, Helen. 2010. Privacy in Context . Stanford University Press.

Wisniewski, Pamela, Heather Lipford, and David Wilson. 2012. Fighting for my space: Coping mechanisms for SNS boundary regulation. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems – CHI ’12 , ACM Press, 609.

Petronio, S. 2010. Communication Privacy Management Theory: What Do We Know About Family Privacy Regulation? Journal of Family Theory & Review 2 (3): 175–196.

Clemens, Chris, David Atkin, and Archana Krishnan. 2015. The influence of biological and personality traits on gratifications obtained through online dating websites. Computers in Human Behavior 49: 120–129.

Vitak, Jessica, and Nicole B. Ellison. 2013. ‘There’s a network out there you might as well tap’: Exploring the benefits of and barriers to exchanging informational and support-based resources on Facebook. New Media & Society 15 (2): 243–259.

Fogel, Joshua, and Elham Nehmad. 2009. Internet social network communities: Risk taking, trust, and privacy concerns. Computers in Human Behavior 25 (1): 153–160.

Agger, Ben. 2015. Oversharing: Presentations of Self in the Internet Age . Routledge.

Book   Google Scholar  

Krämer, Nicole C., and Nina Haferkamp. 2011. Online self-presentation: Balancing privacy concerns and impression construction on social networking sites. In Privacy Online: Perspectives on Privacy and Self-Disclosure in the Social Web , ed. S. Trepte and L. Reinecke, 127–141. Berlin: Springer.

The University of Central Florida, Wisniewski Pamela, A.K.M. Najmul Islam, et al. 2016. Framing and measuring multi-dimensional interpersonal privacy preferences of social networking site users. Communications of the Association for Information Systems 38: 235–258.

Pamela Wisniewski, A.K.M. Najmul Islam, Bart P. Knijnenburg, and Sameer Patil. 2015. Give social network users the privacy they want. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing , ACM, 1427–1441.

Bouvier, Gwen. 2015. What is a discourse approach to Twitter, Facebook, YouTube and other social media: Connecting with other academic fields? Journal of Multicultural Discourses 10 (2): 149–162.

Altman, Irwin. 1975. The Environment and Social Behavior: Privacy, Personal Space, Territory, and Crowding . Monterey, CA: Brooks/Cole Publishing Company.

Paasonen, Susanna, Ben Light, and Kylie Jarrett. 2019. The dick pic: Harassment, curation, and desire. Social Media + Society 5 (2): 2056305119826126.

Karr-Wisniewski, Pamela, David Wilson, and Heather Richter-Lipford. 2011. A new social order: Mechanisms for social network site boundary regulation. In Americas Conference on Information Systems, AMCIS .

Page, Xinru, Pamela Wisniewski, Bart P. Knijnenburg, and Moses Namara. 2018. Social media’s have-nots: An era of social disenfranchisement. Internet Research 28: 5.

Sleeper, Manya, Rebecca Balebako, Sauvik Das, Amber Lynn McConahy, Jason Wiese, and Lorrie Faith Cranor. 2013. The post that wasn’t: Exploring self-censorship on Facebook. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work , ACM, 793–802.

Das, Sauvik, and Adam Kramer. 2013. Self-censorship on Facebook. Proceedings of the International AAAI Conference on Web and Social Media 7: 1.

Lampinen, Airi, Vilma Lehtinen, Asko Lehmuskallio, and Sakari Tamminen. 2011. We’re in it together: Interpersonal management of disclosure in social network services. In Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems – CHI ’11 , ACM Press, 3217.

Buchanan, Tom, Carina Paine, Adam N. Joinson, and Ulf-Dietrich Reips. 2007. Development of measures of online privacy concern and protection for use on the internet. Journal of the American Society for Information Science & Technology 58 (2): 157–165.

Malhotra, Naresh K., Sung S. Kim, and James Agarwal. 2004. Internet Users’ Information Privacy Concerns (IUIPC): The construct, the scale, and a causal model. Information Systems Research 15 (4): 336–355.

Westin, Alan. 1991. Harris-Equifax Consumer Privacy Survey . Atlanta, GA: Equifax Inc.

Page, Xinru, Reza Ghaiumy Anaraky, and Bart P. Knijnenburg. 2019. How communication style shapes relationship boundary regulation and social media adoption. In Proceedings of the 10th International Conference on Social Media and Society , 126–135.

Page, Xinru, Bart P. Knijnenburg, and Alfred Kobsa. 2013. FYI: Communication style preferences underlie differences in location-sharing adoption and usage. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing , ACM, 153–162.

Page, Xinru Woo. 2014. Factors That Influence Adoption and Use of Location-Sharing Social Media . Irvine: University of California.

Solove, Daniel. 2008. Understanding Privacy . Cambridge, MA: Harvard University Press.

Knijnenburg, B.P., Alfred Kobsa, and Hongxia Jin. 2013. Dimensionality of information disclosure behavior. International Journal of Human-Computer Studies 71 (12): 1144–1162.

Wilkinson, Daricia, Paritosh Bahirat, Moses Namara, et al. 2019. Privacy at a glance: Exploring the effectiveness of screensavers to improve privacy awareness. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI). Under Review , ACM.

Joinson, Adam N., Ulf-Dietrich Reips, Tom Buchanan, and Carina B. Paine Schofield. 2010. Privacy, trust, and self-disclosure online. Human–Computer Interaction 25 (1): 1–24.

Nissenbaum, Helen. 2004. Privacy as contextual integrity. Washington Law Review 79: 119–157.

Ramokapane, Kopo M., Gaurav Misra, Jose M. Such, and Sören Preibusch. 2021. Truth or dare: Understanding and predicting how users lie and provide untruthful data online.

Barkhuus, Louise. 2012. The mismeasurement of privacy: Using contextual integrity to reconsider privacy in HCI. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems , ACM, 367–376.

Cho, Hichang, Bart Knijnenburg, Alfred Kobsa, and Yao Li. 2018. Collective privacy management in social media: A cross-cultural validation. ACM Transactions on Computer-Human Interaction 25 (3): 17:1–17:33.

Hoy, Mariea Grubbs, and George Milne. 2010. Gender differences in privacy-related measures for young adult Facebook users. Journal of Interactive Advertising 10 (2): 28–45.

Li, Yao, Bart P. Knijnenburg, Alfred Kobsa, and M-H. Carolyn Nguyen. 2015. Cross-cultural privacy prediction. In Workshop “Privacy Personas and Segmentation”, 11th Symposium On Usable Privacy and Security (SOUPS) .

Sheehan, Kim Bartel. 1999. An investigation of gender differences in on-line privacy concerns and resultant behaviors. Journal of Interactive Marketing 13 (4): 24–38.

Wyatt, Sally M.E. 2003. Non-users also matter: The construction of users and non-users of the Internet. Now Users Matter: The Co-construction of Users and Technology : 67–79.

2018. Facebook and Instagram introduce time limit tool. BBC News . Retrieved February 10, 2021 from https://www.bbc.com/news/newsbeat-45030712 .

Andalibi, Nazanin. 2020. Disclosure, privacy, and stigma on social media: Examining non-disclosure of distressing experiences. ACM Transactions on Computer-Human Interaction (TOCHI) 27 (3): 1–43.

Gibbs, Martin, James Meese, Michael Arnold, Bjorn Nansen, and Marcus Carter. 2015. #Funeral and Instagram: Death, social media, and platform vernacular. Information, Communication & Society 18 (3): 255–268.

Download references

Author information

Authors and affiliations.

Brigham Young University, Provo, UT, USA

Xinru Page & Sara Berrios

Department of Computer Science, Clemson University, Clemson, SC, USA

Daricia Wilkinson

Department of Computer Science, University of Central Florida, Orlando, FL, USA

Pamela J. Wisniewski

You can also search for this author in PubMed   Google Scholar

Corresponding author

Correspondence to Xinru Page .

Editor information

Editors and affiliations.

Clemson University, Clemson, SC, USA

Bart P. Knijnenburg

University of Central Florida, Orlando, FL, USA

Pamela Wisniewski

University of North Carolina at Charlotte, Charlotte, NC, USA

Heather Richter Lipford

School of Social and Behavioral Sciences, Arizona State University, Tempe, AZ, USA

Nicholas Proferes

Bridgewater Associates, Westport, CT, USA

Jennifer Romano

Rights and permissions

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Reprints and permissions

Copyright information

© 2022 The Author(s)

About this chapter

Page, X., Berrios, S., Wilkinson, D., Wisniewski, P.J. (2022). Social Media and Privacy. In: Knijnenburg, B.P., Page, X., Wisniewski, P., Lipford, H.R., Proferes, N., Romano, J. (eds) Modern Socio-Technical Perspectives on Privacy. Springer, Cham. https://doi.org/10.1007/978-3-030-82786-1_7

Download citation

DOI : https://doi.org/10.1007/978-3-030-82786-1_7

Published : 09 February 2022

Publisher Name : Springer, Cham

Print ISBN : 978-3-030-82785-4

Online ISBN : 978-3-030-82786-1

eBook Packages : Computer Science Computer Science (R0)

Share this chapter

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Publish with us

Policies and ethics

  • Find a journal
  • Track your research

Why protecting privacy is a losing game today—and how to change the game

Subscribe to techstream, cameron f. kerry cameron f. kerry ann r. and andrew h. tisch distinguished visiting fellow - governance studies , center for technology innovation @cam_kerry.

July 12, 2018

Recent congressional hearings and data breaches have prompted more legislators and business leaders to say the time for broad federal privacy legislation has come. Cameron Kerry presents the case for adoption of a baseline framework to protect consumer privacy in the U.S.

Kerry explores a growing gap between existing laws and an information Big Bang that is eroding trust. He suggests that recent privacy bills have not been ambitious enough, and points to the Obama administration’s Consumer Privacy Bill of Rights as a blueprint for future legislation. Kerry considers ways to improve that proposal, including an overarching “golden rule of privacy” to ensure people can trust that data about them is handled in ways consistent with their interests and the circumstances in which it was collected.

Table of Contents Introduction: Game change? How current law is falling behind Shaping laws capable of keeping up

  • 31 min read

Introduction: Game change?

There is a classic episode of the show “I Love Lucy” in which Lucy goes to work wrapping candies on an assembly line . The line keeps speeding up with the candies coming closer together and, as they keep getting farther and farther behind, Lucy and her sidekick Ethel scramble harder and harder to keep up. “I think we’re fighting a losing game,” Lucy says.

This is where we are with data privacy in America today. More and more data about each of us is being generated faster and faster from more and more devices, and we can’t keep up. It’s a losing game both for individuals and for our legal system. If we don’t change the rules of the game soon, it will turn into a losing game for our economy and society.

More and more data about each of us is being generated faster and faster from more and more devices, and we can’t keep up. It’s a losing game both for individuals and for our legal system.

The Cambridge Analytica drama has been the latest in a series of eruptions that have caught peoples’ attention in ways that a steady stream of data breaches and misuses of data have not.

The first of these shocks was the Snowden revelations in 2013. These made for long-running and headline-grabbing stories that shined light on the amount of information about us that can end up in unexpected places. The disclosures also raised awareness of how much can be learned from such data (“we kill people based on metadata,” former NSA and CIA Director Michael Hayden said ).

The aftershocks were felt not only by the government, but also by American companies, especially those whose names and logos showed up in Snowden news stories. They faced suspicion from customers at home and market resistance from customers overseas. To rebuild trust, they pushed to disclose more about the volume of surveillance demands and for changes in surveillance laws. Apple, Microsoft, and Yahoo all engaged in public legal battles with the U.S. government.

Then came last year’s Equifax breach that compromised identity information of almost 146 million Americans. It was not bigger than some of the lengthy roster of data breaches that preceded it, but it hit harder because it rippled through the financial system and affected individual consumers who never did business with Equifax directly but nevertheless had to deal with the impact of its credit scores on economic life. For these people, the breach was another demonstration of how much important data about them moves around without their control, but with an impact on their lives.

Now the Cambridge Analytica stories have unleashed even more intense public attention, complete with live network TV cut-ins to Mark Zuckerberg’s congressional testimony. Not only were many of the people whose data was collected surprised that a company they never heard of got so much personal information, but the Cambridge Analytica story touches on all the controversies roiling around the role of social media in the cataclysm of the 2016 presidential election. Facebook estimates that Cambridge Analytica was able to leverage its “academic” research into data on some 87 million Americans (while before the 2016 election Cambridge Analytica’s CEO Alexander Nix boasted of having profiles with 5,000 data points on 220 million Americans). With over two billion Facebook users worldwide, a lot of people have a stake in this issue and, like the Snowden stories, it is getting intense attention around the globe, as demonstrated by Mark Zuckerberg taking his legislative testimony on the road to the European Parliament .

The Snowden stories forced substantive changes to surveillance with enactment of U.S. legislation curtailing telephone metadata collection and increased transparency and safeguards in intelligence collection. Will all the hearings and public attention on Equifax and Cambridge Analytica bring analogous changes to the commercial sector in America?

I certainly hope so. I led the Obama administration task force that developed the “ Consumer Privacy Bill of Rights ” issued by the White House in 2012 with support from both businesses and privacy advocates, and then drafted legislation to put this bill of rights into law. The legislative proposal issued after I left the government did not get much traction, so this initiative remains unfinished business.

The Cambridge Analytica stories have spawned fresh calls for some federal privacy legislation from members of Congress in both parties, editorial boards, and commentators. With their marquee Zuckerberg hearings behind them, senators and congressmen are moving on to think about what do next. Some have already introduced bills and others are thinking about what privacy proposals might look like. The op-eds and Twitter threads on what to do have flowed. Various groups in Washington have been convening to develop proposals for legislation.

This time, proposals may land on more fertile ground. The chair of the Senate Commerce Committee, John Thune (R-SD) said “many of my colleagues on both sides of the aisle have been willing to defer to tech companies’ efforts to regulate themselves, but this may be changing.” A number of companies have been increasingly open to a discussion of a basic federal privacy law. Most notably, Zuckerberg told CNN “I’m not sure we shouldn’t be regulated,” and Apple’s Tim Cook expressed his emphatic belief that self-regulation is no longer viable.

For a while now, events have been changing the way that business interests view the prospect of federal privacy legislation.

This is not just about damage control or accommodation to “techlash” and consumer frustration. For a while now, events have been changing the way that business interests view the prospect of federal privacy legislation. An increasing spread of state legislation on net neutrality, drones, educational technology, license plate readers, and other subjects and, especially broad new legislation in California pre-empting a ballot initiative, have made the possibility of a single set of federal rules across all 50 states look attractive. For multinational companies that have spent two years gearing up for compliance with the new data protection law that has now taken effect in the EU, dealing with a comprehensive U.S. law no longer looks as daunting. And more companies are seeing value in a common baseline that can provide people with reassurance about how their data is handled and protected against outliers and outlaws.

This change in the corporate sector opens the possibility that these interests can converge with those of privacy advocates in comprehensive federal legislation that provides effective protections for consumers. Trade-offs to get consistent federal rules that preempt some strong state laws and remedies will be difficult, but with a strong enough federal baseline, action can be achievable.

how current law is falling behind

Snowden, Equifax, and Cambridge Analytica provide three conspicuous reasons to take action. There are really quintillions of reasons. That’s how fast IBM estimates we are generating digital information, quintillions of bytes of data every day—a number followed by 30 zeros. This explosion is generated by the doubling of computer processing power every 18-24 months that has driven growth in information technology throughout the computer age, now compounded by the billions of devices that collect and transmit data, storage devices and data centers that make it cheaper and easier to keep the data from these devices, greater bandwidth to move that data faster, and more powerful and sophisticated software to extract information from this mass of data. All this is both enabled and magnified by the singularity of network effects—the value that is added by being connected to others in a network—in ways we are still learning.

This information Big Bang is doubling the volume of digital information in the world every two years. The data explosion that has put privacy and security in the spotlight will accelerate. Futurists and business forecasters debate just how many tens of billions of devices will be connected in the coming decades, but the order of magnitude is unmistakable—and staggering in its impact on the quantity and speed of bits of information moving around the globe. The pace of change is dizzying, and it will get even faster—far more dizzying than Lucy’s assembly line.

Most recent proposals for privacy legislation aim at slices of the issues this explosion presents. The Equifax breach produced legislation aimed at data brokers. Responses to the role of Facebook and Twitter in public debate have focused on political ad disclosure, what to do about bots, or limits to online tracking for ads. Most state legislation has targeted specific topics like use of data from ed-tech products, access to social media accounts by employers, and privacy protections from drones and license-plate readers. Facebook’s simplification and expansion of its privacy controls and recent federal privacy bills in reaction to events focus on increasing transparency and consumer choice. So does the newly enacted California Privacy Act.

This information Big Bang is doubling the volume of digital information in the world every two years. The data explosion that has put privacy and security in the spotlight will accelerate. Most recent proposals for privacy legislation aim at slices of the issues this explosion presents.

Measures like these double down on the existing American privacy regime. The trouble is, this system cannot keep pace with the explosion of digital information, and the pervasiveness of this information has undermined key premises of these laws in ways that are increasingly glaring. Our current laws were designed to address collection and storage of structured data by government, business, and other organizations and are busting at the seams in a world where we are all connected and constantly sharing. It is time for a more comprehensive and ambitious approach. We need to think bigger, or we will continue to play a losing game.

Our existing laws developed as a series of responses to specific concerns, a checkerboard of federal and state laws, common law jurisprudence, and public and private enforcement that has built up over more than a century. It began with the famous Harvard Law Review article by (later) Justice Louis Brandeis and his law partner Samuel Warren in 1890 that provided a foundation for case law and state statutes for much of the 20th Century, much of which addressed the impact of mass media on individuals who wanted, as Warren and Brandeis put it, “to be let alone.” The advent of mainframe computers saw the first data privacy laws adopted in 1974 to address the power of information in the hands of big institutions like banks and government: the federal Fair Credit Reporting Act that gives us access to information on credit reports and the Privacy Act that governs federal agencies. Today, our checkerboard of privacy and data security laws covers data that concerns people the most. These include health data, genetic information, student records and information pertaining to children in general, financial information, and electronic communications (with differing rules for telecommunications carriers, cable providers, and emails).

Outside of these specific sectors is not a completely lawless zone. With Alabama adopting a law last April, all 50 states now have laws requiring notification of data breaches (with variations in who has to be notified, how quickly, and in what circumstances). By making organizations focus on personal data and how they protect it, reinforced by exposure to public and private enforcement litigation, these laws have had a significant impact on privacy and security practices. In addition, since 2003, the Federal Trade Commission—under both Republican and Democratic majorities—has used its enforcement authority to regulate unfair and deceptive commercial practices and to police unreasonable privacy and information security practices. This enforcement, mirrored by many state attorneys general, has relied primarily on deceptiveness, based on failures to live up to privacy policies and other privacy promises.

These levers of enforcement in specific cases, as well as public exposure, can be powerful tools to protect privacy. But, in a world of technology that operates on a massive scale moving fast and doing things because one can, reacting to particular abuses after-the-fact does not provide enough guardrails.

As the data universe keeps expanding, more and more of it falls outside the various specific laws on the books. This includes most of the data we generate through such widespread uses as web searches, social media, e-commerce, and smartphone apps. The changes come faster than legislation or regulatory rules can adapt, and they erase the sectoral boundaries that have defined our privacy laws. Take my smart watch, for one example: data it generates about my heart rate and activity is covered by the Health Insurance Portability and Accountability Act (HIPAA) if it is shared with my doctor, but not when it goes to fitness apps like Strava (where I can compare my performance with my peers). Either way, it is the same data, just as sensitive to me and just as much of a risk in the wrong hands.

As the data universe keeps expanding, more and more of it falls outside the various specific laws on the books.

It makes little sense that protection of data should depend entirely on who happens to hold it. This arbitrariness will spread as more and more connected devices are embedded in everything from clothing to cars to home appliances to street furniture. Add to that striking changes in patterns of business integration and innovation—traditional telephone providers like Verizon and AT&T are entering entertainment, while startups launch into the provinces of financial institutions like currency trading and credit and all kinds of enterprises compete for space in the autonomous vehicle ecosystem—and the sectoral boundaries that have defined U.S. privacy protection cease to make any sense.

Putting so much data into so many hands also is changing the nature of information that is protected as private. To most people, “personal information” means information like social security numbers, account numbers, and other information that is unique to them. U.S. privacy laws reflect this conception by aiming at “personally identifiable information,” but data scientists have repeatedly demonstrated that this focus can be too narrow. The aggregation and correlation of data from various sources make it increasingly possible to link supposedly anonymous information to specific individuals and to infer characteristics and information about them. The result is that today, a widening range of data has the potential to be personal information, i.e. to identify us uniquely. Few laws or regulations address this new reality.

Nowadays, almost every aspect of our lives is in the hands of some third party somewhere. This challenges judgments about “expectations of privacy” that have been a major premise for defining the scope of privacy protection. These judgments present binary choices: if private information is somehow public or in the hands of a third party, people often are deemed to have no expectation of privacy. This is particularly true when it comes to government access to information—emails, for example, are nominally less protected under our laws once they have been stored 180 days or more, and articles and activities in plain sight are considered categorically available to government authorities. But the concept also gets applied to commercial data in terms and conditions of service and to scraping of information on public websites, for two examples.

As more devices and sensors are deployed in the environments we pass through as we carry on our days, privacy will become impossible if we are deemed to have surrendered our privacy simply by going about the world or sharing it with any other person. Plenty of people have said privacy is dead, starting most famously with Sun Microsystems’ Scott McNealy back in the 20th century (“you have zero privacy … get over it”) and echoed by a chorus of despairing writers since then. Without normative rules to provide a more constant anchor than shifting expectations, true privacy actually could be dead or dying. The Supreme Court may have something to say on the subject in we will need a broader set of norms to protect privacy in settings that have been considered public. Privacy can endure, but it needs a more enduring foundation.

The Supreme Court in its recent Carpenter decision recognized how constant streams of data about us change the ways that privacy should be protected. In holding that enforcement acquisition of cell phone location records requires a warrant, the Court considered the “detailed, encyclopedic, and effortlessly compiled” information available from cell service location records and “the seismic shifts in digital technology” that made these records available, and concluded that people do not necessarily surrender privacy interests to collect data they generate or by engaging in behavior that can be observed publicly. While there was disagreement among Justices as to the sources of privacy norms, two of the dissenters, Justice Alito and Gorsuch, pointed to “expectations of privacy” as vulnerable because they can erode or be defined away.

How this landmark privacy decision affects a wide variety of digital evidence will play out in criminal cases and not in the commercial sector. Nonetheless, the opinions in the case point to a need for a broader set of norms to protect privacy in settings that have been thought to make information public. Privacy can endure, but it needs a more enduring foundation.

Our existing laws also rely heavily on notice and consent—the privacy notices and privacy policies that we encounter online or receive from credit card companies and medical providers, and the boxes we check or forms we sign. These declarations are what provide the basis for the FTC to find deceptive practices and acts when companies fail to do what they said. This system follows the model of informed consent in medical care and human subject research, where consent is often asked for in person, and was imported into internet privacy in the 1990s. The notion of U.S. policy then was to foster growth of the internet by avoiding regulation and promoting a “ market resolution ” in which individuals would be informed about what data is collected and how it would be processed, and could make choices on this basis.

Maybe informed consent was practical two decades ago, but it is a fantasy today. In a constant stream of online interactions, especially on the small screens that now account for the majority of usage, it is unrealistic to read through privacy policies. And people simply don’t.

It is not simply that any particular privacy policies “suck,” as Senator John Kennedy (R-LA) put it in the Facebook hearings. Zeynep Tufecki is right that these disclosures are obscure and complex . Some forms of notice are necessary and attention to user experience can help, but the problem will persist no matter how well designed disclosures are. I can attest that writing a simple privacy policy is challenging, because these documents are legally enforceable and need to explain a variety of data uses; you can be simple and say too little or you can be complete but too complex. These notices have some useful function as a statement of policy against which regulators, journalists, privacy advocates, and even companies themselves can measure performance, but they are functionally useless for most people, and we rely on them to do too much.

At the end of the day, it is simply too much to read through even the plainest English privacy notice, and being familiar with the terms and conditions or privacy settings for all the services we use is out of the question. The recent flood of emails about privacy policies and consent forms we have gotten with the coming of the EU General Data Protection Regulation have offered new controls over what data is collected or information communicated, but how much have they really added to people’s understanding? Wall Street Journal reporter Joanna Stern attempted to analyze all the ones she received (enough paper printed out to stretch more than the length of a football field), but resorted to scanning for a few specific issues. In today’s world of constant connections, solutions that focus on increasing transparency and consumer choice are an incomplete response to current privacy challenges.

Moreover, individual choice becomes utterly meaningless as increasingly automated data collection leaves no opportunity for any real notice, much less individual consent. We don’t get asked for consent to the terms of surveillance cameras on the streets or “beacons” in stores that pick up cell phone identifiers, and house guests aren’t generally asked if they agree to homeowners’ smart speakers picking up their speech. At best, a sign may be posted somewhere announcing that these devices are in place. As devices and sensors increasingly are deployed throughout the environments we pass through, some after-the-fact access and control can play a role, but old-fashioned notice and choice become impossible.

Ultimately, the familiar approaches ask too much of individual consumers. As the President’s Council of Advisers on Science and Technology Policy found in a 2014 report on big data , “the conceptual problem with notice and choice is that it fundamentally places the burden of privacy protection on the individual,” resulting in an unequal bargain, “a kind of market failure.”

This is an impossible burden that creates an enormous disparity of information between the individual and the companies they deal with. As Frank Pasquale ardently dissects in his “Black Box Society,”   we know very little about how the businesses that collect our data operate. There is no practical way even a reasonably sophisticated person can get arms around the data that they generate and what that data says about them. After all, making sense of the expanding data universe is what data scientists do. Post-docs and Ph.D.s at MIT (where I am a visiting scholar at the Media Lab) as well as tens of thousands of data researchers like them in academia and business are constantly discovering new information that can be learned from data about people and new ways that businesses can—or do—use that information. How can the rest of us who are far from being data scientists hope to keep up?

As a result, the businesses that use the data know far more than we do about what our data consists of and what their algorithms say about us. Add this vast gulf in knowledge and power to the absence of any real give-and-take in our constant exchanges of information, and you have businesses able by and large to set the terms on which they collect and share this data.

Businesses are able by and large to set the terms on which they collect and share this data. This is not a “market resolution” that works.

This is not a “market resolution” that works. The Pew Research Center has tracked online trust and attitudes toward the internet and companies online. When Pew probed with surveys and focus groups in 2016, it found that “while many Americans are willing to share personal information in exchange for tangible benefits, they are often cautious about disclosing their information and frequently unhappy about that happens to that information once companies have collected it.” Many people are “uncertain, resigned, and annoyed.” There is a growing body of survey research in the same vein. Uncertainty, resignation, and annoyance hardly make a recipe for a healthy and sustainable marketplace, for trusted brands, or for consent of the governed.

Consider the example of the journalist Julia Angwin. She spent a year trying to live without leaving digital traces, which she described in her book “Dragnet Nation.” Among other things, she avoided paying by credit card and established a fake identity to get a card for when she couldn’t avoid using one; searched hard to find encrypted cloud services for most email; adopted burner phones that she turned off when not in use and used very little; and opted for paid subscription services in place of ad-supported ones. More than a practical guide to protecting one’s data privacy, her year of living anonymously was an extended piece of performance art demonstrating how much digital surveillance reveals about our lives and how hard it is to avoid. The average person should not have to go to such obsessive lengths to ensure that their identities or other information they want to keep private stays private. We need a fair game.

Shaping laws capable of keeping up

As policymakers consider how the rules might change, the Consumer Privacy Bill of Rights we developed in the Obama administration has taken on new life as a model. The Los Angeles Times , The Economist , and The New York Times all pointed to this bill of rights in urging Congress to act on comprehensive privacy legislation, and the latter said “there is no need to start from scratch …” Our 2012 proposal needs adapting to changes in technology and politics, but it provides a starting point for today’s policy discussion because of the wide input it got and the widely accepted principles it drew on.

The bill of rights articulated seven basic principles that should be legally enforceable by the Federal Trade Commission: individual control, transparency, respect for the context in which the data was obtained, access and accuracy, focused collection, security, and accountability. These broad principles are rooted in longstanding and globally-accepted “fair information practices principles.” To reflect today’s world of billions of devices interconnected through networks everywhere, though, they are intended to move away from static privacy notices and consent forms to a more dynamic framework, less focused on collection and process and more on how people are protected in the ways their data is handled. Not a checklist, but a toolbox. This principles-based approach was meant to be interpreted and fleshed out through codes of conduct and case-by-case FTC enforcement—iterative evolution, much the way both common law and information technology developed.

As policymakers consider how the rules might change, the Consumer Privacy Bill of Rights developed in the Obama administration has taken on new life as a model. The bill of rights articulated seven basic principles that should be legally enforceable by the Federal Trade Commission.

The other comprehensive model that is getting attention is the EU’s newly effective General Data Protection Regulation. For those in the privacy world, this has been the dominant issue ever since it was approved two years ago, but even so, it was striking to hear “the GDPR” tossed around as a running topic of congressional questions for Mark Zuckerberg. The imminence of this law, its application to Facebook and many other American multinational companies, and its contrast with U.S. law made GDPR a hot topic. It has many people wondering why the U.S. does not have a similar law, and some saying the U.S. should follow the EU model.

I dealt with the EU law since it was in draft form while I led U.S. government engagement with the EU on privacy issues alongside developing our own proposal. Its interaction with U.S. law and commerce has been part of my life as an official, a writer and speaker on privacy issues, and a lawyer ever since. There’s a lot of good in it, but it is not the right model for America.

There’s a lot of good in the GDPR, but it is not the right model for America.

What is good about the EU law? First of all, it is a law—one set of rules that applies to all personal data across the EU. Its focus on individual data rights in theory puts human beings at the center of privacy practices, and the process of complying with its detailed requirements has forced companies to take a close look at what data they are collecting, what they use it for, and how they keep it and share it—which has proved to be no small task. Although the EU regulation is rigid in numerous respects, it can be more subtle than is apparent at first glance. Most notably, its requirement that consent be explicit and freely given is often presented in summary reports as prohibiting collecting any personal data without consent; in fact, the regulation allows other grounds for collecting data and one effect of the strict definition of consent is to put more emphasis on these other grounds. How some of these subtleties play out will depend on how 40 different regulators across the EU apply the law, though. European advocacy groups were already pursuing claims against “ les GAFAM ” (Google, Amazon, Facebook, Apple, Microsoft) as the regulation went into effect.

The EU law has its origins in the same fair information practice principles as the Consumer Privacy Bill of Rights. But the EU law takes a much more prescriptive and process-oriented approach, spelling out how companies must manage privacy and keep records and including a “right to be forgotten” and other requirements hard to square with our First Amendment. Perhaps more significantly, it may not prove adaptable to artificial intelligence and new technologies like autonomous vehicles that need to aggregate masses of data for machine learning and smart infrastructure. Strict limits on the purposes of data use and retention may inhibit analytical leaps and beneficial new uses of information. A rule requiring human explanation of significant algorithmic decisions will shed light on algorithms and help prevent unfair discrimination but also may curb development of artificial intelligence. These provisions reflect a distrust of technology that is not universal in Europe but is a strong undercurrent of its political culture.

We need an American answer—a more common law approach adaptable to changes in technology—to enable data-driven knowledge and innovation while laying out guardrails to protect privacy. The Consumer Privacy Bill of Rights offers a blueprint for such an approach.

Sure, it needs work, but that’s what the give-and-take of legislating is about. Its language on transparency came out sounding too much like notice-and-consent, for example. Its proposal for fleshing out the application of the bill of rights had a mixed record of consensus results in trial efforts led by the Commerce Department.

It also got some important things right. In particular, the “respect for context” principle is an important conceptual leap. It says that a people “have a right to expect that companies will collect, use, and disclose personal data in ways that are consistent with the context in which consumers provide the data.” This breaks from the formalities of privacy notices, consent boxes, and structured data and focuses instead on respect for the individual. Its emphasis on the interactions between an individual and a company and circumstances of the data collection and use derives from  the insight of information technology thinker Helen Nissenbaum . To assess privacy interests, “it is crucial to know the context—who is gathering the information, who is analyzing it, who is disseminating and to whom, the nature of the information, the relationships among the various parties, and even larger institutional and social circumstances.”

We need an American answer—a more common law approach adaptable to changes in technology—to enable data-driven knowledge and innovation while laying out guardrails to protect privacy.

Context is complicated—our draft legislation listed 11 different non-exclusive factors to assess context. But that is in practice the way we share information and form expectations about how that information will be handled and about our trust in the handler. We bare our souls and our bodies to complete strangers to get medical care, with the understanding that this information will be handled with great care and shared with strangers only to the extent needed to provide care. We share location information with ride-sharing and navigation apps with the understanding that it enables them to function, but Waze ran into resistance when that functionality required a location setting of “always on.” Danny Weitzner, co-architect of the Privacy Bill of Rights, recently discussed how the respect for context principle “would have prohibited [Cambridge Analytica] from unilaterally repurposing research data for political purposes” because it establishes a right “not to be surprised by how one’s personal data issued.” The Supreme Court’s Carpenter decision opens up expectations of privacy in information held by third parties to variations based on the context.

The Consumer Privacy Bill of Rights does not provide any detailed prescription as to how the context principle and other principles should apply in particular circumstances. Instead, the proposal left such application to case-by-case adjudication by the FTC and development of best practices, standards, and codes of conduct by organizations outside of government, with incentives to vet these with the FTC or to use internal review boards similar to those used for human subject research in academic and medical settings. This approach was based on the belief that the pace of technological change and the enormous variety of circumstances involved need more adaptive decisionmaking than current approaches to legislation and government regulations allow. It may be that baseline legislation will need more robust mandates for standards than the Consumer Privacy Bill of Rights contemplated, but any such mandates should be consistent with the deeply embedded preference for voluntary, collaboratively developed, and consensus-based standards that has been a hallmark of U.S. standards development.

In hindsight, the proposal could use a lodestar to guide the application of its principles—a simple golden rule for privacy: that companies should put the interests of the people whom data is about ahead of their own. In some measure, such a general rule would bring privacy protection back to first principles: some of the sources of law that Louis Brandeis and Samuel Warren referred to in their famous law review article were cases in which the receipt of confidential information or trade secrets led to judicial imposition of a trust or duty of confidentiality. Acting as a trustee carries the obligation to act in the interests of the beneficiaries and to avoid self-dealing.

A Golden Rule of Privacy that incorporates a similar obligation for one entrusted with personal information draws on several similar strands of the privacy debate. Privacy policies often express companies’ intention to be “good stewards of data;” the good steward also is supposed to act in the interests of the principal and avoid self-dealing. A more contemporary law review parallel is Yale law professor Jack Balkin’s concept of “ information fiduciaries ,” which got some attention during the Zuckerberg hearing when Senator Brian Schatz (D-HI) asked Zuckerberg to comment on it. The Golden Rule of Privacy would import the essential duty without importing fiduciary law wholesale. It also resonates with principles of “respect for the individual,” “beneficence,” and “justice” in ethical standards for human subject research that influence emerging ethical frameworks for privacy and data use. Another thread came in Justice Gorsuch’s Carpenter dissent defending property law as a basis for privacy interests: he suggested that entrusting someone with digital information may be a modern equivalent of a “bailment” under classic property law, which imposes duties on the bailee. And it bears some resemblance to the GDPR concept of “ legitimate interest ,” which permits the processing of personal data based on a legitimate interest of the processor, provided that this interest is not outweighed by the rights and interests of the subject of the data.

The fundamental need for baseline privacy legislation in America is to ensure that individuals can trust that data about them will be used, stored, and shared in ways that are consistent with their interests and the circumstances in which it was collected. This should hold regardless of how the data is collected, who receives it, or the uses it is put to. If it is personal data, it should have enduring protection.

The fundamental need for baseline privacy legislation in America is to ensure that individuals can trust that data about them will be used, stored, and shared in ways that are consistent with their interests and the circumstances in which it was collected.

Such trust is an essential building block of a sustainable digital world. It is what enables the sharing of data for socially or economically beneficial uses without putting human beings at risk. By now, it should be clear that trust is betrayed too often, whether by intentional actors like Cambridge Analytica or Russian “ Fancy Bears ,” or by bros in cubes inculcated with an imperative to “ deploy or die .”

Trust needs a stronger foundation that provides people with consistent assurance that data about them will be handled fairly and consistently with their interests. Baseline principles would provide a guide to all businesses and guard against overreach, outliers, and outlaws. They would also tell the world that American companies are bound by a widely-accepted set of privacy principles and build a foundation for privacy and security practices that evolve with technology.

Resigned but discontented consumers are saying to each other, “I think we’re playing a losing game.” If the rules don’t change, they may quit playing.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Artificial Intelligence Internet & Telecommunications Privacy

Courts & Law

Governance Studies

Center for Technology Innovation

Artificial Intelligence and Emerging Technology Initiative The Privacy Debate

Joshua P. Meltzer

November 1, 2023

Jordan Famularo, Richmond Wong

October 27, 2022

Nicol Turner Lee, Caitlin Chin-Rothmann

April 7, 2022

Privacy Issues with Social Media

writer-avatar

This essay will examine the privacy concerns associated with social media use. It will discuss how personal information is collected, used, and potentially misused by social media platforms and third parties. The piece will explore the implications of data breaches, targeted advertising, and the erosion of privacy boundaries. Additionally, it will offer insights into how users can protect their privacy online and the ongoing debate over regulation and user rights in the digital age. Additionally, PapersOwl presents more free essays samples linked to Social Media.

How it works

In the 21st century, sharing posts and texting on social media such as Facebook and Instagram has become part of people’s daily life. However, when this personal information is continuously being uploaded on internet, not only can your personal circle see it, but so can everyone else in the world, such as criminals and intelligence agencies. Although some might believe that privacy settings can be controlled by the content creator, in recent events it is clear that privacy is no longer a choice of an individual.

Moreover, there are no existing laws that are able to effectively stop our private communication and information from being disclosed to the third party. For example, viral content, if unnoted by the owner, could potentially result in serious breaches of privacy and create in-person dilemmas unforeseen on an online platform.

People in the digital age must begin to advocate for a more sensitized environment concerning where the boundaries of privacy should be and be aware of how much is actually being controlled by the practical user. Should we have an expectation of privacy when we use social media? Ideally, we should. Privacy is a basic human right that everyone deserves, and it should not be restricted due to the progression of technology. However, the reality is that inhabitants in this complex digital ecosystem are gradually losing their privacy. Digital citizen’s not only have their personal information constantly stored, but their everyday movements are transparent to the public as well. Therefore, users on social networking sites reserve the right to be aware that everything posted on or passing through the internet is at high risks to be exposed to others, no matter what your privacy setting is.

Moreover, the government should take action to properly regulate privacy conditions on social media, preventing social media companies, law enforcement agencies, and criminals from illegally using and monitoring personal information. According to the official “Company Info” of Facebook, which has been deemed as one of the most popular social networking website worldwide, there are 2.23 billion users in 2018, and nearly half of them use Facebook every single day (2018). This also means there are 2.23 billion people in the world “agree” to Facebook’s privacy condition, though most of them might never actually fully read said terms and conditions. Therefore, those casual users may never know that users are actually responsible to take their own actions if they desire their own online privacy.

As stated in a news article exposing Facebook’s privacy settings, “to opt out of full disclosure of most information, it is necessary to click through more than 50 privacy buttons, which then require choosing among a total of more than 170 options” (Bilton, 2010). In other words, users have to spend a considerable amount of time to protect their privacy, their basic human rights, even though many of them are unaware that loopholes even exist. Nonetheless, even if a user changes all the privacy settings on the website, some pieces of information are still vulnerable to be stolen. For example, there is a function called “community pages”, which “automatically links personal data, like hometown or university, to topic pages for that town or university” (Bilton, 2010). Overall, if users are not aware of these details, their personal data can be easily accessed by anyone with greater knowledge of the computer system or database. However, what should be most importantly noted is that overall users are given a false sense of security.

Additionally, private policies are usually inscrutable for normal people. “Facebook’s privacy contract is 5830 words long,” written in incomprehensible legal language (Bilton, 2010). Any normal person would have difficulty to understand it. Even if some users could and would actually read it, they have no opportunity to negotiate with it. And even if they read it, most likely many only have the option to accept the sketchy terms that would violate their own rights in order to stay in touch with their friends. That is our current reality. In fact, according to Lee Rainie’s report, a 2014 survey from Pew research center found that “80% of social media users said they were concerned about advertisers and businesses accessing the data they share on social media platforms” (2018). This demonstrates that most social networking sites, including Facebook, tend to expose as much material as possible to attract a greater audience, as well as businesses, to maximize their profits and popularity.

Although it is natural for companies to be motivated by profit and boosting their marketing revenue, it is undeniable that human rights are unfortunately being swept under the rug as a result. Are there any existing laws that could protect us against these crises? Unfortunately, despite privacy issues having been constantly brought to public’s attention recently, the law is currently of little help to protect users’ privacy. Based on Semitsu’s research, a professor from University of San Diego School of Law, “a warrant is only necessary to compel disclosure of inbox and outbox messages less than 181 days old, based on Facebook’s own interpretation of federal privacy laws” (2011). Semitsu reveals that what we usually think is “personal” is not actually true because social media companies often benefit through the grey areas of law. However, even if Facebook adopted the clearest of policies, for now, user data is still at a high risk of being disclosed.

The first main reason is that federal courts have failed to properly adapt the Fourth Amendment law to the realities of digital culture. Second, is that Congress has failed to meaningfully revise the Electronic Communications Privacy Act (ECPA) for over a quarter century (Semitsu, 2011). From these facts, it is reasonable to conclude there are no solidly existing law or laws that could regulate and control privacy issues on social networking sites. Hence, I assert that the government should take action to protect citizen’s privacy on social media as soon as possible because we are reaching a point in technology where the fine line between on-screen and off-screen are becoming meshed together. Social media surveillance from government agencies is another surrounding controversial issue. Some might think that it is legitimate protection mechanism for polices and intelligence agencies to trace our posts and online activities because that type of information is already somewhat public. For instance, Gillespie, a professor from Lancaster University Law School, states that “when postings are public and available for all to see it is unlikely that it could be concluded that the viewing of the information is covert in that there must be an awareness that those in authority could look at the postings” (Gillespie, 2009).

However, there might be more to consider than what we originally thought. First, information that the government can monitor might be far more than those that are considered “public” by a normal user. As I have described above, according to Semitsu’s report, except for inbox and outbox messages less than 181 days old, “everything else can be obtained with subpoenas that do not even require reasonable suspicion” (Semistsu, 2011). This could threaten people’s freedom of speech and other rights that are supposedly protected by the law, which is especially dangerous for those who hold unpopular perspectives and support minorities causes. Some might doubt the necessity and importance of privacy.

The common saying goes, “if you did not do anything wrong, then you have nothing to worry”. However, I question that statement with the rebuttal of who gets to define the boundary between “wrong” and “right”? What if your positions are against the government? A prime example is Edward Snowden. Snowden, a former employee of CIA who leaked government surveillance programs to the public, has asserted that “[a]rguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say” (Snowden, 2015). In fact, Scott explains in his research that the Department of Homeland Security has actually began monitoring activities, even events expected to be peaceful, related to “Black Lives Matter” on social media accounts including Facebook, Twitter, and Vine since the protest started in Ferguson (Scott, 2017).

In fact, report also showed that “DHS previously contracted with General Dynamics to monitor, in general, the news, specifically social media, for any reports that reflected badly on DHS or the U.S. Government” (Scott, 2017). All in all, government surveillance could pose a profound impact on basic human rights. Collecting data and monitoring movements from any normal citizen was not an easy task in the past, at least not as simple as today, so there might be no policies to set an adequate boundary of government surveillances on internet. However, if we constantly lack new regulations to properly maintain our technological community, the freedom that we relish in person today may not have the same outcome online.

Though people’s privacy is supposed to be protected, some may argue that social media users should not expect privacy since they have disclosed their personal data and private life “voluntarily”. This might sound approvable at first glance, but this is actually not an excuse for social media companies, authorities and others to access and use people’s information in unwarranted situations. First, it is quite difficult for the majority of online citizens to completely opt out of social media and online communication in our modern society. Not only because of social interactions with friends and relatives, but also because of integral sites like LinkedIn and other imperative sites for job postings that normalize an individual in our society today. I personally have an experience of joining a new social media due to a course requirement in school. Second, even if one could avoid to use any social media, their data might still be disclosed due to posts created by other users.

For example, “[o]n closed Facebook profiles, a photo might be ‘tagged’ with the name of a person who might not even themselves have a Facebook account, and so have no access or notice to remove the tag” (Edwards & Urquhart, 2016). Therefore, information could still be collected from someone random on the internet, and there is no way to stop everyone that has your photo and data to upload it on social media. To sum up, contrary to common belief, whether someone chooses to join a certain social media platform or not and what to disclose on it, simply are not voluntary choices in our current day and age. It is not a valid argument to say that social media users do not deserve protection of privacy because they chose to share their own information with others. In conclusion, our lives today are written out in these thousands and millions of Facebook posts, Facebook Messenger texts and Instagram photos.

These social media platforms create an extraordinary networking for us to connect with others both on a different personal level, but also to connect with others in a different time zone or across the globe. Yet, at the same time of enjoying this glorious and complex internet ecosystem, we should also be aware of our privacy, especially those that are generally considered to be private by users such as inbox and outbox messages. Because social media content is controlled by the company of that platform, even if users restrict access to their materials, they are still disclosed to as least one third party.

However, to say that we give up our rights of privacy on social platforms because we are not experts in law and are not able to negotiate the terms and conditions with large companies seems surreal. To say that we should not expect privacy and allow companies, authorities and criminals access our data because we and our friends join social networking sites seems like an unreasonable bargaining tool, hijacking the common man. Putting all the points above together, we should keep fighting for the revision of laws that regulate information securities and privacy before we have to accept the Hobson’s choice: either break off all connections and benefits on social media or give up our rights of privacy.

  • Bilton, N. (2010, May 12). The Price of Facebook Privacy? Start Clicking. Retrieved from https://www.nytimes.com/2010/05/13/technology/personaltech/13basics.html
  • Company Info. (2018.). Retrieved from https://newsroom.fb.com/company-info/
  • Edwards, L., & Urquhart, L. (2016, September 1). Privacy in public spaces: what expectations of privacy do we have in social media intelligence? International Journal of Law and Information Technology.
  • Gillespie, A. (2009). Regulation of online surveillance. Rainie, L. (2018, March 27). How Americans feel about social media and privacy. Retrieved from http://www.pewresearch.org/fact-tank/2018/03/27/americans-complicated-feelings-about-social-media-in-an-era-of-privacy-concerns/
  • Semitsu, J. (2011.). From Facebook to Mug Shot: How the Dearth of Social Networking Privacy Rights Revolutionized Online Government Surveillance.
  • Scott, J. (2017). Social Media and Government Surveillance: The Case for Better Privacy Protections for Our Newest Public Space.
  • Snowden, E. (2015.). R/IAmA – Just days left to kill mass surveillance under Section 215 of the Patriot Act. We are Edward Snowden and the ACLU’s Jameel Jaffer. AUA. Retrieved from https://www.reddit.com/r/IAmA/comments/36ru89/just_days_left_to_kill_mass_surveillance_under/crglgh2/   

owl

Cite this page

Privacy Issues with Social Media. (2021, Apr 12). Retrieved from https://papersowl.com/examples/privacy-issues-with-social-media/

"Privacy Issues with Social Media." PapersOwl.com , 12 Apr 2021, https://papersowl.com/examples/privacy-issues-with-social-media/

PapersOwl.com. (2021). Privacy Issues with Social Media . [Online]. Available at: https://papersowl.com/examples/privacy-issues-with-social-media/ [Accessed: 20 May. 2024]

"Privacy Issues with Social Media." PapersOwl.com, Apr 12, 2021. Accessed May 20, 2024. https://papersowl.com/examples/privacy-issues-with-social-media/

"Privacy Issues with Social Media," PapersOwl.com , 12-Apr-2021. [Online]. Available: https://papersowl.com/examples/privacy-issues-with-social-media/. [Accessed: 20-May-2024]

PapersOwl.com. (2021). Privacy Issues with Social Media . [Online]. Available at: https://papersowl.com/examples/privacy-issues-with-social-media/ [Accessed: 20-May-2024]

Don't let plagiarism ruin your grade

Hire a writer to get a unique paper crafted to your needs.

owl

Our writers will help you fix any mistakes and get an A+!

Please check your inbox.

You can order an original essay written according to your instructions.

Trusted by over 1 million students worldwide

1. Tell Us Your Requirements

2. Pick your perfect writer

3. Get Your Paper and Pay

Hi! I'm Amy, your personal assistant!

Don't know where to start? Give me your paper requirements and I connect you to an academic expert.

short deadlines

100% Plagiarism-Free

Certified writers

Numbers, Facts and Trends Shaping Your World

Read our research on:

Full Topic List

Regions & Countries

  • Publications
  • Our Methods
  • Short Reads
  • Tools & Resources

Read Our Research On:

  • Teens, Social Media, and Privacy

Table of Contents

  • Acknowledgements
  • Introduction
  • Part 1: Teens and Social Media Use
  • Part 2: Information Sharing, Friending, and Privacy Settings on Social Media
  • Part 3: Reputation Management on Social Media
  • Part 4: Putting Privacy Practices in Context: A Portrait of Teens’ Experiences Online

Teens share a wide range of information about themselves on social media sites; 1 indeed the sites themselves are designed to encourage the sharing of information and the expansion of networks. However, few teens embrace a fully public approach to social media. Instead, they take an array of steps to restrict and prune their profiles, and their patterns of reputation management on social media vary greatly according to their gender and network size. These are among the key findings from a new report based on a survey of 802 teens that examines teens’ privacy management on social media sites:

  • Teens are sharing more information about themselves on social media sites than they did in the past. For the five different types of personal information that we measured in both 2006 and 2012, each is significantly more likely to be shared by teen social media users in our most recent survey.

Teen Twitter use has grown significantly: 24% of online teens use Twitter, up from 16% in 2011.

The typical (median) teen facebook user has 300 friends, while the typical teen twitter user has 79 followers..

  • Focus group discussions with teens show that they have waning enthusiasm for Facebook, disliking the increasing adult presence, people sharing excessively, and stressful “drama,” but they keep using it because participation is an important part of overall teenage socializing.

60% of teen Facebook users keep their profiles private, and most report high levels of confidence in their ability to manage their settings.

Teens take other steps to shape their reputation, manage their networks, and mask information they don’t want others to know; 74% of teen social media users have deleted people from their network or friends list., teen social media users do not express a high level of concern about third-party access to their data; just 9% say they are “very” concerned., on facebook, increasing network size goes hand in hand with network variety, information sharing, and personal information management..

  • In broad measures of online experience, teens are considerably more likely to report positive experiences than negative ones. For instance, 52% of online teens say they have had an experience online that made them feel good about themselves.

Teens are sharing more information about themselves on social media sites than they did in the past.

Teens are increasingly sharing personal information on social media sites, a trend that is likely driven by the evolution of the platforms teens use as well as changing norms around sharing. A typical teen’s MySpace profile from 2006 was quite different in form and function from the 2006 version of Facebook as well as the Facebook profiles that have become a hallmark of teenage life today. For the five different types of personal information that we measured in both 2006 and 2012, each is significantly more likely to be shared by teen social media users on the profile they use most often.

  • 91% post a photo of themselves , up from 79% in 2006.
  • 71% post their school name , up from 49%.
  • 71% post the city or town where they live , up from 61%.
  • 53% post their email address , up from 29%.
  • 20% post their cell phone number , up from 2%.

In addition to the trend questions, we also asked five new questions about the profile teens use most often and found that among teen social media users:

  • 92% post their real name to the profile they use most often. 2
  • 84% post their interests , such as movies, music, or books they like.
  • 82% post their birth date .
  • 62% post their relationship status .
  • 24% post videos of themselves .

Figure 1 teens and social media

Older teens are more likely than younger teens to share certain types of information, but boys and girls tend to post the same kind of content.

Generally speaking, older teen social media users (ages 14-17), are more likely to share certain types of information on the profile they use most often when compared with younger teens (ages 12-13).

Older teens who are social media users more frequently share:

  • Photos of themselves on their profile (94% older teens vs. 82% of younger teens)
  • Their school name (76% vs. 56%)
  • Their relationship status (66% vs. 50%)
  • Their cell phone number (23% vs. 11%)

While boys and girls generally share personal information on social media profiles at the same rates, cell phone numbers are a key exception.  Boys are significantly more likely to share their numbers than girls (26% vs. 14%). This is a difference that is driven by older boys. Various differences between white and African-American social media-using teens are also significant, with the most notable being the lower likelihood that African-American teens will disclose their real names on a social media profile (95% of white social media-using teens do this vs. 77% of African-American teens). 3

16% of teen social media users have set up their profile to automatically include their location in posts.

Beyond basic profile information, some teens choose to enable the automatic inclusion of location information when they post. Some 16% of teen social media users said they set up their profile or account so that it automatically includes their location in posts. Boys and girls and teens of all ages and socioeconomic backgrounds are equally likely to say that they have set up their profile to include their location when they post. Focus group data suggests that many teens find sharing their location unnecessary and unsafe, while others appreciate the opportunity to signal their location to friends and parents.

Twitter draws a far smaller crowd than Facebook for teens, but its use is rising. One in four online teens uses Twitter in some way. While overall use of social networking sites among teens has hovered around 80%, Twitter grew in popularity; 24% of online teens use Twitter, up from 16% in 2011 and 8% the first time we asked this question in late 2009.

African-American teens are substantially more likely to report using Twitter when compared with white youth.

Continuing a pattern established early in the life of Twitter, African-American teens who are internet users are more likely to use the site when compared with their white counterparts. Two in five (39%) African-American teens use Twitter, while 23% of white teens use the service.

Public accounts are the norm for teen Twitter users.

While those with Facebook profiles most often choose private settings, Twitter users, by contrast, are much more likely to have a public account.

  • 64% of teens with Twitter accounts say that their tweets are public, while 24% say their tweets are private.
  • 12% of teens with Twitter accounts say that they “don’t know” if their tweets are public or private.
  • While boys and girls are equally likely to say their accounts are public, boys are significantly more likely than girls to say that they don’t know (21% of boys who have Twitter accounts report this, compared with 5% of girls).

Overall, teens have far fewer followers on Twitter when compared with Facebook friends; the typical (median) teen Facebook user has 300 friends, while the typical (median) teen Twitter user has 79 followers. Girls and older teens tend to have substantially larger Facebook friend networks compared with boys and younger teens.

Teens’ Facebook friendship networks largely mirror their offline networks. Seven in ten say they are friends with their parents on Facebook.

Teens, like other Facebook users, have different kinds of people in their online social networks. And how teens construct that network has implications for who can see the material they share in those digital social spaces:

  • 98% of Facebook-using teens are friends with people they know from school.
  • 91% of teen Facebook users are friends with members of their extended family.
  • 89% are connected to friends who do not attend the same school.
  • 76% are Facebook friends with brothers and sisters.
  • 70% are Facebook friends with their parents.
  • 33% are Facebook friends with other people they have not met in person.
  • 30% have teachers or coaches as friends in their network.
  • 30% have celebrities, musicians or athletes in their network.

Older teens tend to be Facebook friends with a larger variety of people, while younger teens are less likely to friend certain groups, including those they have never met in person.

Older teens are more likely than younger ones to have created broader friend networks on Facebook. Older teens (14-17) who use Facebook are more likely than younger teens (12-13) to be connected with:

  • Friends who go to different schools (92% vs. 82%)
  • People they have never met in person, not including celebrities (36% vs. 25%)
  • Teachers or coaches (34% vs. 19%)

Girls are also more likely than boys (37% vs. 23%) to be Facebook friends with coaches or teachers, the only category of Facebook friends where boys and girls differ.

African-American youth are nearly twice as likely as whites to be Facebook friends with celebrities, athletes, or musicians (48% vs. 25%).

Focus group discussions with teens show that they have waning enthusiasm for Facebook.

In focus groups, many teens expressed waning enthusiasm for Facebook. They dislike the increasing number of adults on the site, get annoyed when their Facebook friends share inane details, and are drained by the “drama” that they described as happening frequently on the site. The stress of needing to manage their reputation on Facebook also contributes to the lack of enthusiasm. Nevertheless, the site is still where a large amount of socializing takes place, and teens feel they need to stay on Facebook in order to not miss out.

Users of sites other than Facebook express greater enthusiasm for their choice.

Those teens who used sites like Twitter and Instagram reported feeling like they could better express themselves on these platforms, where they felt freed from the social expectations and constraints of Facebook. Some teens may migrate their activity and attention to other sites to escape the drama and pressures they find on Facebook, although most still remain active on Facebook as well.

Teens have a variety of ways to make available or limit access to their personal information on social media sites. Privacy settings are one of many tools in a teen’s personal data management arsenal. Among teen Facebook users, most choose private settings that allow only approved friends to view the content that they post.

Most keep their Facebook profile private. Girls are more likely than boys to restrict access to their profiles.

Some 60% of teens ages 12-17 who use Facebook say they have their profile set to private, so that only their friends can see it. Another 25% have a partially private profile, set so that friends of their friends can see what they post. And 14% of teens say that their profile is completely public. 4

  • Girls who use Facebook are substantially more likely than boys to have a private (friends only) profile (70% vs. 50%).
  • By contrast, boys are more likely than girls to have a fully public profile that everyone can see (20% vs. 8%).

Most teens express a high level of confidence in managing their Facebook privacy settings.

More than half (56%) of teen Facebook users say it’s “not difficult at all” to manage the privacy controls on their Facebook profile, while one in three (33%) say it’s “not too difficult.” Just 8% of teen Facebook users say that managing their privacy controls is “somewhat difficult,” while less than 1% describe the process as “very difficult.”

Teens’ feelings of efficacy increase with age:

  • 41% of Facebook users ages 12-13 say it is “not difficult at all” to manage their privacy controls, compared with 61% of users ages 14-17.
  • Boys and girls report similar levels of confidence in managing the privacy controls on their Facebook profile.

For most teen Facebook users, all friends and parents see the same information and updates on their profile.

Beyond general privacy settings, teen Facebook users have the option to place further limits on who can see the information and updates they post. However, few choose to customize in that way: Among teens who have a Facebook account, only 18% say that they limit what certain friends can see on their profile. The vast majority (81%) say that all of their friends see the same thing on their profile. 5 This approach also extends to parents; only 5% of teen Facebook users say they limit what their parents can see.

Teens are cognizant of their online reputations, and take steps to curate the content and appearance of their social media presence. For many teens who were interviewed in focus groups for this report, Facebook was seen as an extension of offline interactions and the social negotiation and maneuvering inherent to teenage life. “Likes” specifically seem to be a strong proxy for social status, such that teen Facebook users will manipulate their profile and timeline content in order to garner the maximum number of “likes,” and remove photos with too few “likes.”

Pruning and revising profile content is an important part of teens’ online identity management.

Teen management of their profiles can take a variety of forms – we asked teen social media users about five specific activities that relate to the content they post and found that:

  • 59% have deleted or edited something that they posted in the past.
  • 53% have deleted comments from others on their profile or account.
  • 45% have removed their name from photos that have been tagged to identify them.
  • 31% have deleted or deactivated an entire profile or account.
  • 19% have posted updates, comments, photos, or videos that they later regretted sharing.

74% of teen social media users have deleted people from their network or friends’ list; 58% have blocked people on social media sites.

Given the size and composition of teens’ networks, friend curation is also an integral part of privacy and reputation management for social media-using teens. The practice of friending, unfriending, and blocking serve as privacy management techniques for controlling who sees what and when. Among teen social media users:

  • Girls are more likely than boys to delete friends from their network (82% vs. 66%) and block people (67% vs. 48%).
  • Unfriending and blocking are equally common among teens of all ages and across all socioeconomic groups.
  • 58% of teen social media users say they share inside jokes or cloak their messages in some way.

As a way of creating a different sort of privacy, many teen social media users will obscure some of their updates and posts, sharing inside jokes and other coded messages that only certain friends will understand:

  • Older teens are considerably more likely than younger teens to say that they share inside jokes and coded messages that only some of their friends understand (62% vs. 46%).

26% say that they post false information like a fake name, age, or location to help protect their privacy.

One in four (26%) teen social media users say that they post fake information like a fake name, age or location to help protect their privacy.

  • African-American teens who use social media are more likely than white teens to say that they post fake information to their profiles (39% vs. 21%).

Overall, 40% of teen social media users say they are “very” or “somewhat” concerned that some of the information they share on social networking sites might be accessed by third parties like advertisers or businesses without their knowledge. However, few report a high level of concern; 31% say that they are “somewhat” concerned, while just 9% say that they are “very” concerned. 6 Another 60% in total report that they are “not too” concerned (38%) or “not at all” concerned (22%).

  • Younger teen social media users (12-13) are considerably more likely than older teens (14-17) to say that they are “very concerned” about third party access to the information they share (17% vs. 6%).

Insights from our focus groups suggest that some teens may not have a good sense of whether the information they share on a social media site is being used by third parties.

Parents, by contrast, express high levels of concern about how much information advertisers can learn about their children’s behavior online..

Parents of the surveyed teens were asked a related question: “How concerned are you about how much information advertisers can learn about your child’s online behavior?” A full 81% of parents report being “very” or “somewhat” concerned, with 46% reporting they are “very concerned.”  Just 19% report that they are not too concerned or not at all concerned about how much advertisers could learn about their child’s online activities.

Teens who are concerned about third party access to their personal information are also more likely to engage in online reputation management.

Teens who are somewhat or very concerned that some of the information they share on social network sites might be accessed by third parties like advertisers or businesses without their knowledge more frequently delete comments, untag themselves from photos or content, and deactivate or delete their entire account.  Among teen social media users, those who are “very” or “somewhat” concerned about third party access are more likely than less concerned teens to:

  • Delete comments that others have made on their profile (61% vs. 49%).
  • Untag themselves in photos (52% vs. 41%).
  • Delete or deactivate their profile or account (38% vs. 25%).
  • Post updates, comments, photos or videos that they later regret (26% vs. 14%).

Teens with larger Facebook networks are more frequent users of social networking sites and tend to have a greater variety of people in their friend networks. They also share a wider range of information on their profile when compared with those who have a smaller number of friends on the site. Yet even as they share more information with a wider range of people, they are also more actively engaged in maintaining their online profile or persona.

Teens with large Facebook friend networks are more frequent social media users and participate on a wider diversity of platforms in addition to Facebook.

Teens with larger Facebook networks are fervent social media users who exhibit a greater tendency to “diversify” their platform portfolio:

  • 65% of teens with more than 600 friends on Facebook say that they visit social networking sites several times a day, compared with 27% of teens with 150 or fewer Facebook friends.
  • Teens with more than 600 Facebook friends are more than three times as likely to also have a Twitter account when compared with those who have 150 or fewer Facebook friends (46% vs. 13%). They are six times as likely to use Instagram (12% vs. 2%).

Teens with larger Facebook networks tend to have more variety within those networks.

Almost all Facebook users (regardless of network size) are friends with their schoolmates and extended family members. However, other types of people begin to appear as the size of teens’ Facebook networks expand:

  • Teen Facebook users with more than 600 friends in their network are much more likely than those with smaller networks to be Facebook friends with peers who don’t attend their own school, with people they have never met in person (not including celebrities and other “public figures”), as well as with teachers or coaches.
  • On the other hand, teens with the largest friend networks are actually less likely to be friends with their parents on Facebook when compared with those with the smallest networks (79% vs. 60%).

Teens with large networks share a wider range of content, but are also more active in profile pruning and reputation management activities.

Teens with the largest networks (more than 600 friends) are more likely to include a photo of themselves, their school name, their relationship status, and their cell phone number on their profile when compared with teens who have a relatively small number of friends in their network (under 150 friends). However, teens with large friend networks are also more active reputation managers on social media.

  • Teens with larger friend networks are more likely than those with smaller networks to block other users, to delete people from their friend network entirely, to untag photos of themselves, or to delete comments others have made on their profile.
  • They are also substantially more likely to automatically include their location in updates and share inside jokes or coded messages with others.

In broad measures of online experience, teens are considerably more likely to report positive experiences than negative ones.

In the current survey, we wanted to understand the broader context of teens’ online lives beyond Facebook and Twitter. A majority of teens report positive experiences online, such as making friends and feeling closer to another person, but some do encounter unwanted content and contact from others.

  • 52% of online teens say they have had an experience online that made them feel good about themselves. Among teen social media users, 57% said they had an experience online that made them feel good, compared with 30% of teen internet users who do not use social media.
  • One in three online teens (33%) say they have had an experience online that made them feel closer to another person. Looking at teen social media users, 37% report having an experience somewhere online that made them feel closer to another person, compared with just 16% of online teens who do not use social media.

One in six online teens say they have been contacted online by someone they did not know in a way that made them feel scared or uncomfortable.

Unwanted contact from strangers is relatively uncommon, but 17% of online teens report some kind of contact that made them feel scared or uncomfortable. 7 Online girls are more than twice as likely as boys to report contact from someone they did not know that made them feel scared or uncomfortable (24% vs. 10%).

Few internet-using teens have posted something online that caused problems for them or a family member, or got them in trouble at school.

A small percentage of teens have engaged in online activities that had negative repercussions for them or their family; 4% of online teens say they have shared sensitive information online that later caused a problem for themselves or other members of their family. Another 4% have posted information online that got them in trouble at school.

More than half of internet-using teens have decided not to post content online over reputation concerns.

More than half of online teens (57%) say they have decided not to post something online because they were concerned it would reflect badly on them in the future. Teen social media users are more likely than other online teens who do not use social media to say they have refrained from sharing content due to reputation concerns (61% vs. 39%).

Large numbers of youth have lied about their age in order to gain access to websites and online accounts.

In 2011, we reported that close to half of online teens (44%) admitted to lying about their age at one time or another so they could access a website or sign up for an online account. In the latest survey, 39% of online teens admitted to falsifying their age in order gain access to a website or account, a finding that is not significantly different from the previous survey.

Close to one in three online teens say they have received online advertising that was clearly inappropriate for their age.

Exposure to inappropriate advertising online is one of the many risks that parents, youth advocates, and policy makers are concerned about. Yet, little has been known until now about how often teens encounter online ads that they feel are intended for more (or less) mature audiences. In the latest survey, 30% of online teens say they have received online advertising that is “clearly inappropriate” for their age.

About the survey and focus groups

These findings are based on a nationally representative phone survey run by the Pew Research Center’s Internet & American Life Project of 802 parents and their 802 teens ages 12-17. It was conducted between July 26 and September 30, 2012. Interviews were conducted in English and Spanish and on landline and cell phones. The margin of error for the full sample is ± 4.5 percentage points.

This report marries that data with insights and quotes from in-person focus groups conducted by the Youth and Media team at the Berkman Center for Internet & Society at Harvard University beginning in February 2013. The focus groups focused on privacy and digital media, with special emphasis on social media sites. The team conducted 24 focus group interviews with 156 students across the greater Boston area, Los Angeles (California), Santa Barbara (California), and Greensboro (North Carolina). Each focus group lasted 90 minutes, including a 15-minute questionnaire completed prior to starting the interview, consisting of 20 multiple-choice questions and 1 open-ended response. Although the research sample was not designed to constitute representative cross-sections of particular population(s), the sample includes participants from diverse ethnic, racial, and economic backgrounds. Participants ranged in age from 11 to 19. The mean age of participants is 14.5.

In addition, two online focus groups of teenagers ages 12-17 were conducted by the Pew Internet Project from June 20-27, 2012 to help inform the survey design. The first focus group was with 11 middle schoolers ages 12-14, and the second group was with nine high schoolers ages 14-17. Each group was mixed gender, with some racial, socio-economic, and regional diversity. The groups were conducted as an asynchronous threaded discussion over three days using an online platform and the participants were asked to log in twice per day.

Throughout this report, this focus group material is highlighted in several ways. Pew’s online focus group quotes are interspersed with relevant statistics from the survey in order to illustrate findings that were echoed in the focus groups or to provide additional context to the data. In addition, at several points, there are extensive excerpts boxed off as standalone text boxes that elaborate on a number of important themes that emerged from the in-person focus groups conducted by the Berkman Center.

  • We use “social media site” as the umbrella term that refers to social networking sites (like Facebook, LinkedIn, and Google Plus) as well as to information- and media-sharing sites that users may not think of in terms of networking such as Twitter, Instagram, and Tumblr. “Teen social media users” are teens who use any social media site(s). When we use “social networking sites” or “social networking sites and Twitter,” it will be to maintain the original wording when reporting survey results. ↩
  • Given that Facebook is now the dominant platform for teens, and a first and last name is required when creating an account, this is undoubtedly driving the nearly universal trend among teen social media users to say they post their real name to the profile they use most often. Fake accounts with fake names can still be created on Facebook, but the practice is explicitly forbidden in Facebook’s Terms of Service. ↩
  • The sample size for African-American teens who use social media is relatively small (n=95), but all differences between white and African-American teen social media users noted throughout this section are statistically significant. ↩
  • In 2011, the privacy settings question was asked of all teen SNS or Twitter users, prompting them to think about the “profile they use most often.” Among this group 62% reported having a private profile, 19% said their profile was partially private, and 17% said their profile was public. At the time, almost all of these teen social media users (93%) said they had a Facebook account, but some respondents could have been reporting settings for other platforms. ↩
  • This behavior is consistent, regardless of the general privacy settings on a teen’s profile. ↩
  • Recent research has described a “control paradox” that may influence user behavior and attitudes toward information disclosures online. In spaces where users feel they have control over the publication of their private information, they may “give less importance to control (or lack thereof) of the accessibility and use of that information by others.” See, Laura Brandimarte, et al.: “Misplaced Confidences: Privacy and the Control Paradox.” ↩
  • This question does not reference sexual solicitations and could include an array of contact that made the teen feel scared or uncomfortable. ↩

Sign up for our weekly newsletter

Fresh data delivery Saturday mornings

Sign up for The Briefing

Weekly updates on the world of news & information

  • Online Privacy & Security
  • Platforms & Services
  • Privacy Rights
  • Social Media
  • Teens & Tech
  • Teens & Youth

WhatsApp and Facebook dominate the social media landscape in middle-income nations

5 facts about how americans use facebook, two decades after its launch, americans’ social media use, teens and social media fact sheet, teens, social media and technology 2023, most popular, report materials.

  • Interactive : How Teens Share Information on Social Media
  • Interactive : Teens on Facebook: What They Share with Friends
  • Infographic : Teens, Social Media, and Privacy
  • Infographic : What Teens Share on Social Media
  • Focus group highlights : What teens said about social media, privacy, and online identity
  • July 26-Sept. 30, 2012 – Teens and Online Privacy

1615 L St. NW, Suite 800 Washington, DC 20036 USA (+1) 202-419-4300 | Main (+1) 202-857-8562 | Fax (+1) 202-419-4372 |  Media Inquiries

Research Topics

  • Age & Generations
  • Coronavirus (COVID-19)
  • Economy & Work
  • Family & Relationships
  • Gender & LGBTQ
  • Immigration & Migration
  • International Affairs
  • Internet & Technology
  • Methodological Research
  • News Habits & Media
  • Non-U.S. Governments
  • Other Topics
  • Politics & Policy
  • Race & Ethnicity
  • Email Newsletters

ABOUT PEW RESEARCH CENTER  Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of  The Pew Charitable Trusts .

Copyright 2024 Pew Research Center

  • TeachableMoment

Can We Protect Our Privacy on Social Media?

There's a hidden cost to our free accounts on Facebook, Instagram, Snapchat, and other social media platforms: our privacy. In this lesson, students learn about and discuss how corporations make a profit from our data, potential policy solutions, and how young people are making their own decisions about online privacy.

  • social media

To the Teacher

Social media companies are tracking a tremendous amount of information about our activity online, and they are selling this information for profit. These companies have become huge businesses by offering advertisers and other interested parties data about the items we click on, the things we might like or dislike, and the opinions we express. We don’t have to pay to use social media services because we are the product being sold.

Occasionally, the extent to which corporations are profiting from data about users erupts into public scandal.  Uproar ensues when a company is hacked and personal information is stolen, or when political groups use detailed data to influence voters on social media platforms. But underneath such headline-grabbing incidents are broader issues about of privacy and what we can do to control our personal information online.

Thankfully, while debate continues in the public sphere about regulating social media companies, young people are actively thinking through questions about what information they want to put online and how they can control their digital presence in this age of over-exposure.

This lesson consists of two readings. The first reading looks at how corporations make a profit from our data, and it considers potential policy solutions to this problem. The second reading focuses on how young people are making their own decisions about online privacy. Questions for discussion follow each reading.

Note:  This lesson is Part 3 of a series of lessons on social media.

  • Part 1:  Does Social Media Make Us More or Less Connected ?
  • Part 2: Social Media and the Future of Democracy
  • Part 3: Can We Protect Our Privacy on Social Media?  

social media

Reading One: What Are They Doing with Your Data?

Although you can create an account on Facebook, Instagram, Snapchat, and other social media platforms without paying any money, there’s a hidden cost: the sacrifice of your privacy. Social media companies are tracking a tremendous amount of information about our activity online, and they are selling this information for profit. These companies have become huge businesses by offering advertisers and other interested parties data about the items we click on, the things we might like or dislike, and the opinions we express. In general, we don’t pay to use social media services because we are the product being sold.

What does it mean that we are the product? To get a better idea, we can look at the data-mining behaviors of a company like Facebook. In an April 2018 article for The New York Times, technology reporter Natasha Singer examined how Facebook uses our data. She reports that Facebook “meticulously scrutinizes” our online lives, and not just to show us targeted advertisements. The details that many of us regularly provide on Facebook, such as our age, employer, relationship status, likes and location, are just one part of the information that Facebook analyzes and uses. For example, she writes:

Facebook tracks both its users and nonusers on other sites and apps. It collects biometric facial data without users’ explicit “opt-in” consent. And the sifting of users can get quite personal. Among many possible target audiences, Facebook offers advertisers 1.5 million people “whose activity on Facebook suggests that they’re more likely to engage with/distribute liberal political content” and nearly seven million Facebook users who “prefer high-value goods in Mexico.” “Facebook can learn almost anything about you by using artificial intelligence to analyze your behavior,” said Peter Eckersley, the chief computer scientist for the Electronic Frontier Foundation, a digital rights nonprofit…. When internet users venture to other sites, Facebook can still monitor what they are doing with software like its ubiquitous “Like” and “Share” buttons, and something called Facebook Pixel — invisible code that’s dropped onto the other websites that allows that site and Facebook to track users’ activity…. “Facebook provides a network where the users, while getting free services most of them consider useful, are subject to a multitude of nontransparent analyses, profiling, and other mostly obscure algorithmical processing,” said Johannes Caspar, the data protection commissioner for Hamburg, Germany. https://www.nytimes.com/2018/04/11/technology/facebook-privacy-hearings.html?register=google

From time to time, this corporate data-mining erupts into scandal, like when a company is hacked and personal information is stolen. But sometimes scandal erupts from a legal use of social media data. This happened in 2018, when it was revealed that the firm Cambridge Analytica used Facebook data to construct detailed personality profiles of U.S. voters and target them with specific advertisements. In a March 2018 article for the Chicago Tribune, business reporter Ally Marotti described the uproar. She wrote:

Facebook CEO Mark Zuckerberg promised in a post Wednesday that the social media company would do more to protect its users’ data. “We have a responsibility to protect your data, and if we can't then we don't deserve to serve you,” he wrote. Zuckerberg’s post came following public outcry in response to a report last weekend from The New York Times and The Observer of London that Cambridge Analytica, a political data firm hired by the Trump campaign, gained access to private information of more than 50 million Facebook users, including their profiles, locations and what they like. The firm claimed its tools could analyze voters’ personalities and influence their behavior with targeted messages. Cambridge Analytica improperly acquired the information, Facebook has said, but it wasn’t stolen. Users allowed the maker of a personality quiz app to take the data. About 270,000 people took the quiz several years ago, the Times reported, and the app-maker was able to scrape data from their Facebook friends. He then provided the data to Cambridge Analytica…. Since the report last weekend, several American and British lawmakers have called for greater privacy protection and asked Zuckerberg to explain what the company knew about the misuse of its data…. The debate over internet privacy legislation in the U.S. has shifted from the federal to state level in recent years, but proponents argue there aren’t enough laws at either level to adequately protect users. [ https://www.chicagotribune.com/business/ct-biz-data-privacy-facebook-cambridge-analytica20180319-story.html ]

If corporations are using our social media data to sell us products, influence our decisions, and affect our votes, all with limited legal oversight, what can be done?

 In 2018, the European Union passed the General Data Protection Regulation (GDPR), one of the tougher online privacy laws in the world. In a May 2018 article for The New York Times, London-based technology correspondent Adam Satariano explained the measure:

The new law requires companies to be transparent about how your data is handled, and to get your permission before starting to use it. It raises the legal bar that businesses must clear to target ads based on personal information like your relationship status, job or education, or your use of websites and apps. That means online advertising in Europe could become broader, returning to styles more akin to magazines and television, where marketers have a less detailed sense of the audience. Some of the tools companies develop to comply with the GDPR might be made available to users whether they live in Europe or not. Facebook, for example, announced in April that it would offer the privacy controls required under the new law to all users, not just Europeans…. [Y]ou can ask companies what information they hold about you , and then request that it be deleted. This applies not just to tech companies, but also to banks, retailers, grocery stores or any other organization storing your information. You can even ask your employer. And if you suspect your information is being misused or collected unnecessarily, you can complain to your national data protection regulator, which must investigate . https://www.nytimes.com/2018/05/06/technology/gdpr-european-privacy-law.html?login=google  

So far in the U.S., companies can choose whether or not they want to abide by such European-style standards. Public interest advocates argue that this has left Americans exposed to abuses. As more people become aware of the negative effects of corporate data-mining, demands to change public policy domestically may well gain greater traction.

For Discussion  

  • How much of the material in this reading was new to you, and how much was already familiar? Do you have any questions about what you read?  
  • According to the reading, what kinds of information does a corporation like Facebook collect about its users? How does it make money from that data?  
  • Have you seen evidence in your own online experience that you are being tracked and that your data is being used, perhaps in ways you hadn’t anticipated?  
  • Do you think that the use of user data described in the reading is abusive, or simply something that people voluntarily opt into as a condition of using social media platforms? Explain your position.  
  • What is meant by the expression, “if you’re not paying, you are the product”? What do you think of this idea?  
  • According to the reading, what are some of the effects of the GDPR? What do you think of these requirements?  
  • What other changes would you like to see in privacy protections here in the United States?

Reading Two: How Are Young People Protecting Their Privacy?

Online privacy—or lack of it—can  have real-world impacts. Using social media puts us at risk of data-mining and surveillance, and it also leaves a permanent record that can be examined by employers, university admissions departments, family members, bullies, and political opponents. Such a prospect might give many users pause, even if they otherwise enjoy their lives online.

Thankfully, while debate continues in the public sphere about regulating social media companies, many young people are actively thinking through questions about what information they want to put online and how they can control their digital presence.

In a 2016 article for Vox, Irina Raicu, Internet Ethics Program Director at the Markkula Center for Applied Ethics at Santa Clara University, examined several studies of the privacy behaviors of young people and young adults. She summarized her findings:  

[P]eople between 13 and 35 do care about keeping some control over their information, and take measures to protect their privacy online, even as they sense that most such measures are imperfect solutions. It may surprise you to find out that 60 percent of the teens surveyed … “say they have created accounts that their parents were unaware of, such as on a social media site or for an app.” That is a privacy-protective measure: When it comes to privacy violations, the people teens are most worried about are their parents. As the report notes, “teens greatly value having some level of privacy from their parents when using the internet.” The older “young people” surveyed by Hargittai and Marwick report that they deploy a wide variety of privacy-protective measures: “Using different sites and apps for different purposes, configuring settings on social media sites, using pseudonyms in certain situations, switching between multiple accounts, turning on incognito options in their browsers, opting out of certain apps of sites, deleting cookies and even using Do-Not-Track browser plugins and password-management apps.”…. [A]   Pew Research study notes that “young adults generally are more focused than their elders when it comes to online privacy.” That study asked about some privacy-protective strategies, as well: Among the 18-to-29-year-olds surveyed, 74 percent said they had cleared cookies and browser histories, 71 percent had deleted or edited something they had posted, 49 percent had configured their browsers to reject cookies, 42 percent had decided not to use certain sites that demanded their real names, and 41 percent had used temporary user names or email addresses. In each of those categories, the younger users surpassed their elders. https://www.vox.com/2016/11/2/13390458/young-millennials-oversharing-security-digital-online-privacy

Some young people are taking even more drastic approaches, either aggressively self-censoring what they post or leaving social media behind entirely. Faced with bullying, mental health concerns, and the relentless pace of maintaining a social media presence, some are choosing to move relationships offline.

In a March 2019 article for Fast Company, Sonia Bokhari, an 8th grader who leads her middle school’s Gay-Straight Alliance and is a member of the school’s Environmental Club, gave an account of why she decided to dramatically curtail her social media use after feeling that her privacy had been violated. She wrote:

My parents had long ago made the rule that my siblings and I weren’t allowed to use social media until we turned 13, which was late, compared to many of my friends who started using Instagram, Wattpad, and Tumblr when we were 10 years old…. [S]everal months ago, when I turned 13, my mom gave me the green light and I joined Twitter and Facebook. The first place I went, of course, was my mom’s profiles. That’s when I realized that while this might have been the first time I was allowed on social media, it was far from the first time my photos and stories had appeared online. When I saw the pictures that she had been posting on Facebook for years, I felt utterly embarrassed, and deeply betrayed…. [My mom and my sister] were surprised when they heard how I felt, genuinely surprised. They didn’t know I would get so upset over it, because their intentions weren’t to embarrass me, but to keep a log and document what their little sister/youngest daughter was doing in her early childhood and young teenage years…. In the months since I discovered my unauthorized social media presence, I became more active on Facebook and Twitter. But it wasn’t until I’d been on social media for around nine months that I thought seriously about my digital footprint. Every October my school gave a series of presentations about our digital footprints and online safety. The presenters from an organization called OK2SAY, which educates and helps teenagers about being safe online, emphasized that we shouldn’t ever post anything negative about anyone or post unapproved inappropriate pictures, because it could very deeply affect our school lives and our future job opportunities…. While I hadn’t posted anything negative on my accounts, these conversations, along with what I had discovered posted about me online, motivated me to think more seriously about how my behavior online now could affect my future… I realized that being 13 and using social media wasn’t a fantastic idea, even though I wasn’t obsessed with it and was using it appropriately. My accounts now remain dormant and deactivated…. My friends are active social media users, but I think they are more cautious than they were before. They don’t share their locations or post their full names online, and they keep their accounts private. I think in general my generation has to be more mature and more responsible than our parents, or even teens and young adults in high school and college…. https://www.fastcompany.com/90315706/kids-parents-social-media-sharing

Just as young people have set the trends for which social media platforms rise and fall, they can also change the conversation about how we engage with these corporations and maintain control of our digital lives.

  • How much of the material in this reading was new to you, and how much was already familiar? Do you have any questions about what you read?
  • According to the reading, what are some decisions that young people are making to protect their privacy online?
  • Have you tried any of the strategies discussed in this reading? If so, how did they go for you?
  • Sonia Bokhari reports that she felt betrayed when she went online and saw information that her mother had posted about her when she was a kid. Have you ever experienced something like this? What sort of conversations do you think families and friends should have with one another to make sure they are respecting each other’s privacy?
  • Do you think more young people will choose to quit or substantially reduce their use of social media in the future? Or do you think social media usage will continue grow? What factors might affect the future role of social media in our lives?  

Research assistance provided by John Bergen.

Share this Page

Home — Essay Samples — Social Issues — Privacy — Privacy Issues Concerning Social Media

test_template

Privacy Issues Concerning Social Media

  • Categories: Privacy Social Media

About this sample

close

Words: 657 |

Published: Mar 19, 2020

Words: 657 | Page: 1 | 4 min read

Image of Dr. Oliver Johnson

Cite this Essay

Let us write you an essay from scratch

  • 450+ experts on 30 subjects ready to help
  • Custom essay delivered in as few as 3 hours

Get high-quality help

author

Dr. Karlyna PhD

Verified writer

  • Expert in: Social Issues Sociology

writer

+ 120 experts online

By clicking “Check Writers’ Offers”, you agree to our terms of service and privacy policy . We’ll occasionally send you promo and account related email

No need to pay just yet!

Related Essays

7 pages / 3037 words

3 pages / 1531 words

1 pages / 618 words

1 pages / 642 words

Remember! This is just a sample.

You can get your custom paper by one of our expert writers.

121 writers online

Still can’t find what you need?

Browse our vast selection of original essay samples, each expertly formatted and styled

Related Essays on Privacy

Technology has become an integral part of our lives, impacting every aspect of our daily routines. It has greatly simplified tasks, allowing us to accomplish things more quickly and efficiently. Additionally, it has opened up [...]

My right to privacy at home, in my car, and within my emails is one of the most fundamental rights protecting about who I am as a person. One of the amendments that present this right is the Fourth Amendment to the US [...]

In today's digital age, social media platforms have become an integral part of our daily lives, shaping how we connect with others, consume information, and perceive the world around us. While the advent of social media has [...]

It is both the accomplishment and the progress of legal codes with the intentionality of preserving the country against assualts, illegal foreign invasions and terrorism. It greatly helps to maintain security in worldwide trade [...]

During your visit to this website, Right Claims may collect personal data about you, either directly (where you are asked to provide the data) or indirectly. Right Claims will, however, only use these personal data in [...]

Related Topics

By clicking “Send”, you agree to our Terms of service and Privacy statement . We will occasionally send you account related emails.

Where do you want us to send this sample?

By clicking “Continue”, you agree to our terms of service and privacy policy.

Be careful. This essay is not unique

This essay was donated by a student and is likely to have been used and submitted before

Download this Sample

Free samples may contain mistakes and not unique parts

Sorry, we could not paraphrase this essay. Our professional writers can rewrite it and get you a unique paper.

Please check your inbox.

We can write you a custom essay that will follow your exact instructions and meet the deadlines. Let's fix your grades together!

Get Your Personalized Essay in 3 Hours or Less!

We use cookies to personalyze your web-site experience. By continuing we’ll assume you board with our cookie policy .

  • Instructions Followed To The Letter
  • Deadlines Met At Every Stage
  • Unique And Plagiarism Free

essay on does social media violate our privacy

  • Up to 1 Years Old
  • Up to 2 Years Old
  • Up to 3 Years Old
  • Up to 4 Years Old
  • Up to 5 Years Old
  • Up to 6 Years Old
  • Up to 7 Years Old
  • Up to 8 Years Old
  • Up to 9 Years Old
  • Up to 12 Years Old
  • 24 Volt Parallel
  • Lamborghini
  • Bluetooth Audio Connectivity
  • 3 & 5 Point Safety Harness
  • With Optional Leather Style Seat Upgrade
  • With Optional Soft EVA Wheel Upgrade
  • All Wheel Drive
  • The Most Expensive Ride On Toys at RiiRoo | £500+
  • Older Children
  • Upgrade to Leather Style Padded Seat
  • Upgrade to Soft EVA Wheels
  • Shop All Cars & Jeeps
  • Kids Electric Toys Up to 3 Years Old
  • Kids Electric Toys Up to 4 Years Old
  • Kids Electric Toys Up to 5 Years Old
  • Kids Electric Toys Up to 6 Years Old
  • Kids Electric Toys Up to 7 Years Old
  • Kids Electric Toys Up to 8 Years Old
  • Kids Electric Toys Up to 9 Years Old
  • Kids Electric Toys Up to 12 Years Old
  • Kids Electric Toys Up to 15+ Years Old
  • Electric Motorbikes
  • Petrol Motorbikes
  • Shop All Motorbikes
  • Electric Quads & ATV's
  • Petrol Quads & ATV's
  • Shop All Quads & ATV's
  • Electric Scooters
  • Push Scooters
  • Shop All Scooters
  • Drift Karts
  • Petrol Go Karts
  • Pedal Go Karts
  • Shop All Go Karts
  • Push Along's
  • High Powered Electric
  • High Powered Petrol
  • Shop All High Powered
  • Control Board Modules
  • Media Players
  • Parental Remotes
  • Pedal Switches
  • Shop All Spares
  • Showroom Range
  • Accessories
  • Kids Ride on Trains
  • Pink Electric Ride on Cars
  • Kids Diggers
  • Pre-Assembled
  • Shop All Others
  • RiiRoo Coupon and Discount Codes
  • Ride On Buying Guides
  • Customer Reviews
  • Latest Products
  • Assembly Video Instructions
  • Troubleshooting Guides
  • Help Centre
  • Lifestyle Blog
  • Health and Wellbeing Blog

What is The Impact of Social Media on Privacy?

As social media has become more popular, people have become increasingly worried about the privacy implications of using these platforms.

a kid doing some programming on computer

Additionally, many people are concerned about the way that social media companies collect and use data about their users.

These concerns have led to calls for tighter regulation of social media companies and increased awareness of the importance of protecting one’s privacy online. However, it remains to be seen how effective such measures will be in protecting users’ privacy.

1. Social media has impacted privacy by enabling people to share personal information

Social media has impacted privacy by enabling people to share personal information with a wider audience than ever before:

Social media platforms like Twitter allow users to make posts available for public viewing, meaning that they can reach a much larger audience than before (when such platforms did not exist).

Additionally, many people associate themselves with their online identity on these platforms and believe that it is okay to share their personal thoughts and information with a wide audience.

2. Social media has led to more breaches of privacy

Recent events have confirmed everyone’s worst fears about the dangers of posting too much personal information online on social media platforms:

For example, many people were extremely concerned when it was revealed that Cambridge Analytica had been using Facebook data of 50 million users to help them with their political advertising during Donald Trump’s presidential campaign in 2016.

In another instance, Equifax, one of the US' major credit rating agencies announced that hackers had stolen sensitive financial data from over 140 million customers (which would potentially compromise both individual and national security).

This is a problem because once a user shares information online, they lose control over it - it means that it might be used for purposes which they do not approve, or in a way that they don’t expect.

3. Social media has impacted privacy by enabling the collection of private data

Social media platforms are able to collect data about their users via their website logs, search engines, cookies, third-party apps and other sources.

This enables websites use this information for targeted advertising or even to sell it on to third parties without the user’s knowledge:

Although many social media platforms have claimed that they use this information ethically, there is still cause for concern.

For example, Facebook was recently forced to admit that it had been collecting Android users’ call records and text message history since 2017  without letting them know.

Although Facebook claims that they have done this in order to improve their messaging service, the fact remains that they were using people’s personal data for financial gain without permission.

4. The impact on privacy is not known, because it's hard to regulate social media companies

Even though there are now calls for greater regulation of the social media industry, it is far from certain that this will actually be effective in protecting people’s privacy.

Although the European Union has taken steps to protect internet users by enforcing strict data protection laws, even these measures have not been enough to stop Facebook and Google from allegedly using shady practices such as tracking users across websites and storing cookies on them even when they have no accounts (which violates basic principles of online marketing).

This suggests that tighter regulations may not solve the problem of personal information being used without consent.

5. Users should be aware of the risks and take steps to protect their own privacy online

Despite the challenges that lie ahead when it comes to protecting user privacy on social media, it is important for users to be aware of the risks and take steps in order to protect themselves.

Many people are now deleting their Facebook accounts or at least considering deleting them after they have become more conscious about where their data is going and what it being used for.

Additionally, users should not share private information via social media platforms unless absolutely necessary (i.e., keep financial details like credit card numbers and expiry dates private) as this will reduce the risk of cyber theft.

8. There are ways that users can minimize the risk of their private information being exposed online

  • Avoid sharing any personal information online
  • Delete social media accounts that do not add much value to their lives
  • Only sign in and use social media platforms via the most secure internet connections possible (e.g., using a VPN)
  • Be aware of when companies are tracking them online and avoid letting their browser save cookies on their devices

Even though there are ways for users to protect their own privacy, there are also other ways in which people's sensitive information can be leaked online, such as when they open attachments or click on links that they should not have.

This is another reason why users should be careful about what they open/click on and avoid doing this over an unsecured internet connection.

Although social media platforms like Facebook and Twitter give users the option to set their profiles to private mode, these measures offer limited protection against people finding out information about you.

It is difficult for anyone who wants to protect themselves from sharing too much personal information online because it requires constant vigilance.

Social media sites are designed specifically to get users to share things willingly without thinking about what the consequences may be.

The question of whether social media impacts privacy remains unclear because different studies have come up with very different results when investigating this issue

Some studies have shown that social media impacts privacy because people are more likely to share information online than they would otherwise (the mere-exposure effect).

On the other hand, other studies have suggested that sharing information on social media may actually increase privacy because it allows users to selectively reveal only certain things about themselves, ie what they want others to know.

As of yet, it is difficult to reach a definitive conclusion as whether or not social media impacts privacy because different studies have come up with very different results.

Research has found that some people are reluctant to share personal details unless they are certain their posts can be kept private, while others have shown that users regularly disclose sensitive information even when they know it isn't private.

Even though this contradiction suggests that social media does impact privacy in some way, the specific nature of the relationship between these two factors remains unclear.

Wrapping Up:

Although social media can have a negative impact on privacy because it allows users to share information they should not be revealing, the relationship between sharing-information and reduced-privacy is complex and different studies have come up with very different results when investigating this issue.

Social media platforms have given rise to many benefits such as connecting with friends and family members who live far away, but they have also enabled websites to collect far more data than was previously possible.

This unfortunately means that people’s personal information is often exploited for financial gain or even stolen by hackers without them knowing about it.

Although tighter regulations may be enforced in the future to ensure that data is better protected from those who do not have permission access to it, social media will always carry some element of risk as long as people continue to share things about themselves online without thinking carefully first.

Added to your cart:

Additional items are displayed in the cart and checkout.

Compare products

{"one"=>"Select 2 or 3 items to compare", "other"=>"{{ count }} of 3 items selected"}

Select first item to compare

Select second item to compare

Select third item to compare

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • Front Psychol

Research on the influence mechanism of privacy invasion experiences with privacy protection intentions in social media contexts: Regulatory focus as the moderator

1 School of Journalism and Communication, Xiamen University, Xiamen, China

2 Research Center for Intelligent Society and Social Governance, Interdisciplinary Research Institute, Zhejiang Lab, Hangzhou, China

Associated Data

The original contributions presented in the study are included in the article/ Supplementary material , further inquiries can be directed to the corresponding author.

Introduction

In recent years, there have been numerous online privacy violation incidents caused by the leakage of personal information of social media users, yet there seems to be a tendency for users to burn out when it comes to privacy protection, which leads to more privacy invasions and forms a vicious circle. Few studies have examined the impact of social media users' privacy invasion experiences on their privacy protection intention. Protection motivation theory has often been applied to privacy protection research. However, it has been suggested that the theory could be improved by introducing individual emotional factors, and empirical research in this area is lacking.

To fill these gaps, the current study constructs a moderated chain mediation model based on protection motivation theory and regulatory focus theory, and introduces privacy fatigue as an emotional variable.

Results and discussion

An analysis of a sample of 4800 from China finds that: (1) Social media users' previous privacy invasion experiences can increase their privacy protection intention. This process is mediated by response costs and privacy fatigue. (2) Privacy fatigue plays a masking effect, i.e., increased privacy invasion experiences and response costs will raise individuals' privacy fatigue, and the feeling of privacy fatigue significantly reduces individuals' willingness to protect their privacy. (3) Promotion-focus individuals are less likely to experience privacy fatigue than those with prevention-focus. In summary, this trend of “lie flat” on social media users' privacy protection is caused by the key factor of “privacy fatigue”, and the psychological trait of regulatory focus can be used to interfere with the development of privacy fatigue. This study extends the scope of research on privacy protection and regulatory focus theory, refines the theory of protection motivation, and expands the empirical study of privacy fatigue; the findings also inform the practical governance of social network privacy.

1. Introduction

Nowadays, people communicate and share information through SNS, and it has become an integral part of the daily lives of network users worldwide (Hsu et al., 2013 ). SNS makes people's lives highly convenient. However, it also poses an increasingly serious privacy issue. For instance, British media reported that 87,000,000 Facebook users' profiles were illegally leaked to a political consulting firm, Cambridge Analytica (Revell, 2019 ). In addition, one of the three major US credit bureaus, Equifax, reported a large-scale data leak in 2017, including 146 million pieces of personal information (Zhou and Schaub, 2018 ). The incidents that happened in recent years provoked a wave of discussion on personal privacy and information security issues.

Individuals' proactive behavior in protecting online privacy information is an effective method for reducing the occurrence of privacy violations; therefore, scholars explored how to enhance individuals' willingness to protect privacy. In terms of applied theoretical models, the Health Belief Model (HBM) (Kisekka and Giboney, 2018 ), the Technology Threat Avoidance Theory (TTAT) (McLeod and Dolezel, 2022 ), the Technology Acceptance Model (TAM) (Baby and Kannammal, 2020 ), and the Theory of Planned Behavior (TPB) (Xu et al., 2013 ) have been applied to explore the issue of online privacy protection behavior. By contrast, Protection Motivation Theory (PMT) is more applicable to studying privacy protection behavior in SNS because it focuses on threat assessment and coping mechanisms for privacy issues. However, the issue with this study's application of PMT theory is that it ignores the influence of individual emotions on protective behavior (Mousavi et al., 2020 ). Therefore, this study considered privacy fatigue as a variable to expand the theory of PMT in the context of social media privacy protection research. Moreover, in terms of the antecedents of privacy protection, existing research suggests that factors such as perceived benefits, perceived risks (Price et al., 2005 ), privacy concerns (Youn and Kim, 2019 ), self-efficacy (Baruh et al., 2017 ), and trust (Wang et al., 2017 ) can affect individuals' privacy-protective behaviors.

Along with the increased frequency of data breaches on the Internet, people find that they have less control over their data. Further, they are overwhelmed by having to protect their privacy alone. Moreover, the complexity of the measures required to protect personal information aggravates users' sense of futility, leading to exhaustion among online users. This phenomenon, defined as “privacy fatigue,” is regarded as a factor leading to the avoidance of privacy issues. Privacy fatigue has recently been prevalent among network users. However, empirical studies related to this phenomenon are still insufficient (Choi et al., 2018 ). Therefore, this study attempted to explore the role privacy burnout plays in users' privacy protection behaviors. Previous studies discovered that the impact of varying degrees of privacy invasion on privacy protection differed according to individual differences. It could be moderated by psychological differences (Lai and Hui, 2006 ). Clarifying the role of psychological traits is beneficial to the hierarchical governance of privacy protection. Regulatory focus is a kind of psychological trait based on different regulatory orientations, which could effectively affect social media users' behavioral preferences and decisions on privacy protection (Cho et al., 2019 ); however, to date, the relationship between regulatory focus, privacy fatigue, and privacy protection intentions has not been sufficiently examined. For this reason, it is necessary to empirically explore this question.

Based on the PMT theoretical framework, this study built a moderated mediation model to examine the influential mechanism of privacy-invasive experiences on privacy protection intentions by introducing three factors: response costs, privacy burnout, and regulatory focus. Data analyzed from an online survey of 4,800 network users demonstrated that, first, social media users' experiences of privacy invasion increase their willingness to protect privacy. Second, privacy burnout has a masking effect, which means that the more privacy-invasive experiences and response costs there are, the greater the privacy fatigue, which reduces users' privacy protection intentions even further. Third, promotion-focused individuals are less likely to experience fatigue from protecting personal information alone. The significance of this study lies in the fact that it bridged the gap between the effects of privacy violation experiences on individuals' protective willingness.

Meanwhile, this study verified the practicality of combining PMT theory with emotionally related variables. Additionally, it complemented the study on privacy fatigue and expanded the scope of regulatory orientation theory in privacy research. From a practical perspective, this study offered a reference for the hierarchical governance of privacy in social networks. Finally, this study reveals a vicious cycle mechanism (negative experiences, privacy fatigue, low willingness to protect, and new negative experiences) followed by a theoretical reference for breaking this cycle.

2. Theoretical framework

2.1. privacy invasion experiences, response costs, and privacy protection intentions.

Protection motivation theory (PMT) is commonly used in online privacy studies (Chen et al., 2015 ). According to Rogers ( 1975 ), individuals cognitively evaluate the risk before adopting behaviors, develop protection motivation, and eventually modify their behaviors to avoid risks. There are two sources of impact on people's response assessments: environmental and interpersonal sources of information and prior experience. After combing through the past literature, we found that many scholars have verified the influence of environmental (Wu et al., 2019 ) and interpersonal (Hsu et al., 2013 ) factors on individual privacy protection; however, only a few scholars explored the effect of privacy violation experiences on privacy protection intentions. Some studies proved that individuals' prior privacy violation experiences are an antecedent to their information privacy concerns, including in the mobile context and at the online marketplace (Pavlou and Gefen, 2005 ; Belanger and Crossler, 2019 ). Regarding privacy concerns, prior studies widely demonstrated a significant antecedent to privacy protection intentions and protective behaviors. In addition, a meta-analysis found that users who worried about privacy were less likely to use internet services and were more likely to adopt privacy-protective actions (Baruh et al., 2017 ).

People make sense of the world based on their prior experiences (Floyd et al., 2000 ), while network users who have had privacy-invasive experiences tend to believe that the privacy risks are closely related to themselves (Li, 2008 ). They tend to be more aware of the seriousness and vulnerability of privacy issues (Mohamed and Ahmad, 2012 ). The effects of previous negative experiences on perceived vulnerability can also be explained by the availability heuristic, which assumes that the easier it is to retrieve experienced cases from memory, the higher the perceived frequency of the event. In contrast, when fewer cases are retrieved, people may estimate that the event is less likely to occur than in objective situations. Therefore, people's accumulated experiences of negative events might influence their perception of future vulnerability to risk (Tversky and Kahneman, 1974 ). However, in accordance with PMT, seriousness and vulnerability affect protective behavior in the context of social media privacy issues. Therefore, we can assume that the more memories of privacy violations people have, the more likely they are to believe that their privacy will be violated by privacy exposure, thereby increasing their motivation to protect privacy that is, their willingness to protect privacy. Therefore, this study proposed the following hypothesis:

  • H1: Privacy invasion experience is positively affecting protective privacy willingness.

PMT suggests that cognitive evaluation consists of assessing response costs (Rogers, 1975 ), and response costs refer to any costs, such as monetary, time, and effort (Floyd et al., 2000 ). According to findings from a health psychology study, when faced with the threat of skin cancer, people prefer to use sunscreen rather than avoid the sun (Jones and Leary, 1994 ; Wichstrom, 1994 ). It may be because of the lower response costs of utilizing sunscreen. These findings inspire us to believe that individuals calculate the response cost before they take protective actions. Privacy protection-related studies also indicate that prior experiences with personal information violations may significantly increase consumers' privacy concerns about both offline and online privacy and that privacy concerns are related to perceived risks (Okazaki et al., 2009 ; Bansal et al., 2010 ). It has also been shown that individuals who have experienced privacy invasion perceive a greater severity of risk (Petronio, 2002 ). Considering individuals' perceptions of risks affects their assessment of costs, which is part of the game between risks and benefits. In other words, a stronger risk perception indicates that higher response costs should be paid. Thus, this study assumed that people with more privacy violation experiences might perceive higher response costs and tend to take protective actions to avoid paying more. Consequently, this study made the following hypothesis:

  • H2a: A higher level of privacy-invasive experiences results in a higher perception of response costs.
  • H2b: A higher level of perception of response costs will result in higher privacy protection intentions.
  • H2c: Response cost mediates the effect of privacy-invasive experiences on privacy protection intentions.

2.2. Privacy invasion experiences, response costs, and privacy protection intentions

The medical community first introduced the concept of fatigue and referred to it as a subjective unpleasant feeling of tiredness (Piper et al., 1987 ). The concept of fatigue has been used in many research fields, such as clinical medicine (Mao et al., 2018 ), psychology, and more (Ong et al., 2006 ). In recent years, scholars also used the concept of “fatigue” in the study of social media and regarded it as an important antecedent to individual behaviors (Ravindran et al., 2014 ). Choi et al. ( 2018 ) defined “privacy fatigue” as a psychological state of fatigue caused by privacy issues. Specifically, “privacy fatigue” manifests itself as an unwillingness to actively manage and protect one's personal information and privacy (Hargittai and Marwick, 2016 ).

With the increasing severity of social network and personal information issues, the research around privacy fatigue, especially the examination of the antecedents and effects of privacy fatigue, has been widely developed. Regarding antecedents, scholars found that privacy concerns, self-disclosure, learning about privacy statements and information security, and the complexity of privacy protection practices could influence individuals' levels of privacy fatigue (Dhir et al., 2019 ; Oh et al., 2019 ). In terms of the effects, privacy fatigue can not only cause people to reduce the frequency of using social media or even withdraw from the Internet (Ravindran et al., 2014 ), but it can also motivate individuals to resist disclosing personal information (Keith et al., 2014 ); however, only a few studies examined privacy invasion experiences, privacy fatigue, and privacy protection intentions under one theoretical framework.

Furnell and Thomson ( 2009 ) pointed out that “privacy fatigue” is triggered by an individual's experience with privacy problems. Additionally, privacy fatigue has a boundary. When this boundary is crossed, social network users become bored with privacy management, leading them to abandon social network services. It has also been suggested that privacy data breaches can cause individuals to feel “disappointed.” In a study of medical data protection, the results showed that breaches of patients' medical data can have a cumulative effect on patients' behavioral decisions by causing them to perceive that their requests for privacy protection are being ignored (Juhee and Eric, 2018 ). The relationship between privacy invasion experiences and privacy fatigue has been widely demonstrated. Such social media characteristics as internet privacy threat experience and privacy invasion could lead to users' sense of emotional exhaustion and privacy cynicism, which was further associated with social media privacy fatigue (Xiao and Mou, 2019 ; Sheng et al., 2022 ). In terms of the outcomes, some other studies focusing on the privacy paradox found that emotional exhaustion and powerlessness (the same concept as exhaustion) would weaken the positive influence relationship between privacy concerns and their willingness to protect personal information (Tian et al., 2022 ). On account of the above reviews, it is reasonable to analogize that an individual's privacy invasion experience in the context of social media use can exacerbate an individual's perception of privacy fatigue. In other words, considering the social media privacy context, privacy fatigue may lead network users to abandon privacy protection behaviors and create opportunities for privacy invasion. Based on the above discussions, we proposed the following hypotheses:

  • H3a: Privacy invasion experiences positively affect privacy fatigue.
  • H3b: Privacy fatigue negatively affects privacy protection intentions.
  • H3c: Privacy fatigue has a masking (a form of mediating effect) role in the effects of individual social media privacy invasion experiences on privacy protection intentions.

As discussed above, we hypothesized that both response costs and privacy fatigue mediate the effect of social media users' privacy invasion experiences on their privacy protection intentions. Assuming that both response costs and privacy fatigue could mediate the effect of social media users' privacy invasion experiences on their privacy protection intentions, what is the association between response costs and privacy fatigue? It has been argued that a common shortcoming of current research applying PMT theory is that it ignores the role emotions play in this mechanism (Mousavi et al., 2020 ). This view is supported by Li's research, which argues that most research on privacy topics is conducted from a risk assessment perspective and tends to ignore the impact of emotions on privacy protection behaviors (Li et al., 2016 ). It was believed that emotions could change an individual's attention and beliefs (Friestad and Thorson, 1985 ). These factors are both related to behavioral intentions.

It has also been suggested that emotions play a mediating role in the process of behavioral decision-making (Tanner et al., 1991 ). However, only a few studies explored this influential mechanism to date. Zhang et al. ( 2022 ) found a positive influence between response costs and privacy fatigue. They conducted the research based on the Stressor-Strain-Outcome (S-S-O) framework to explore which factors (stressors) could cause privacy fatigue intentions (strain) and related behaviors (outcome). The results discovered that time cost and several other stressors significantly positively impact social media fatigue intention. As quoted from Floyd et al. ( 2000 ), “response costs” refer to any costs in which time costs were included. Despite an important reference to the above study's results provided for this study, the time cost is just one factor among response costs. This piece of research will focus on general response costs, assisting in a better understanding of this influential mechanism. Based on this, we proposed the following hypotheses:

  • H4a: Privacy response costs are positively associated with privacy fatigue.
  • H4b: Response costs and privacy fatigue play chain mediating roles in the effect of privacy invasion experiences on privacy protection intentions.

2.3. Regulatory focus as the moderator

Differences in individual psychological traits can lead to significant differences in individuals' cognition and behaviors (Benbasat and Dexter, 1982 ), and it has been shown that personal psychological traits can influence individuals' perceptions of fatigue (Dhir et al., 2019 ). A recent study also found that neuroticism has positive effects on privacy fatigue but that traits like agreeableness and extraversion have negative effects (Tang et al., 2021 ). However, previous research on social media privacy fatigue is relatively limited. Given the critical nature of privacy fatigue in research models, it is necessary to explore the differences in perceived fatigue among individuals with different psychological traits. This study introduced individual levels of regulatory focus as a moderator and examined the effect of privacy invasion experiences on privacy fatigue. Regulatory focus as a psychological trait was applied to explain social media users' privacy management and privacy protection problems (Wirtz and Lwin, 2009 ; Li et al., 2019 ).

Regulatory Focus Theory (RFT) classifies individuals into two different levels based on psychological traits: promotion focus, which focuses more on benefits and ignores potential risks, and prevention focus, which tends to avoid risks and ignore benefits when making decisions (Higgins, 1997 ). Research demonstrated that perceptions of benefits are supposed to reduce fatigue, while perceptions of risk could exacerbate fatigue (Boksem and Tops, 2008 ). By the same analogy, promotion-focused individuals are more inclined to notice the benefits of using social media (Jin, 2012 ) and thus may experience less fatigue and lower response costs when experiencing privacy violations; in contrast, individuals with a prevention focus are more aware of the risks associated with privacy invasion and thus have more concerns about privacy issues, which can lead to more feelings of fatigue and higher perceived response costs about privacy issues. Combined with H4, we can reason that the path of influence of social media privacy invasion experiences on privacy protection intentions may be affected by the level of individual regulatory focus. The effect of privacy invasion experiences on privacy fatigue and response costs was stronger for individuals who tended to be prevention focused than for those who tended to be promotion focused. Therefore, the mediating effect of privacy fatigue and response cost is stronger. In summary, this study proposed the hypotheses as follows:

  • H5a: Compared to promotion-focused users, the effect of privacy invasion experiences on privacy fatigue is greater for prevention-focused users.
  • H5b: Compared to prevention-focused users, the effect of privacy invasion experiences on response costs is greater for promotion-focused users.

2.4. Current study

In summary, the current study concluded that, in the social media context, users' experiences of privacy invasion would increase their perception of response costs and thus result in privacy fatigue. Privacy fatigue decreases individuals' privacy protection intentions. However, this process differed for individuals with different regulatory focuses. In detail, individuals with a promotion focus are less likely to experience privacy fatigue than individuals with a prevention focus. Based on the above logic, the conceptual model constructed in this study is shown in Figure 1 .

An external file that holds a picture, illustration, etc.
Object name is fpsyg-13-1031592-g0001.jpg

Conceptual model.

3. Materials and methods

3.1. participants and procedures.

This survey was conducted in December 2021, and Zhejiang Lab collected the data. The questionnaire was pretested with a small group of participants to ensure the questions were clearly phrased. Participants were informed of their right to withdraw and were assured of confidentiality and anonymity before participating in this research survey. Computers, tablets, and mobile phones were all used to complete the cross-sectional survey. After giving their consent, participants were asked to complete the following scales. After the screening, 4,800 valid questionnaires were selected. The invalid questionnaires were deleted mainly based on not passing the test of the screening questions rather than not answering the questions carefully (e.g., the answers to the questions of several consecutive variables are the same, or the number of repeated options is >70%).

To guarantee data quality and reduce possible interference from gender and geographical factors, this survey used a quota sampling method, as shown in Table 1 , with a sample gender ratio of 1:1 and samples from 16 cities in China, with 300 valid samples in each city. Considering the possible relationship between the privacy invasion experience and the years of Internet usage, participants' previous privacy invasion experience is meaningful to this study, and the final sample had 34.5 and 57.3% of Internet usage between 5 and 10 years and more than 10 years, respectively, which met the requirements of the study. In terms of education level, college and bachelor's degrees accounted for the largest proportion, at 62.0%, followed by high school/junior high school and vocational high school, at 27.3%. In terms of the age of the sample, the ratio of those younger than 46 years old to those above was 59.7:40.3 with a balanced distribution among all age groups. The basic demographic variables are tabulated as shown in Table 1 .

Statistical table of basic information on effective samples.

3.2. Measurements

Based on the model and hypotheses of this study, the instruments of this study included measures of privacy invasion experiences, response costs, privacy fatigue, privacy protection intentions, and regulatory focus (including promotion focus and prevention focus). This study's questionnaire was designed on scales that have been pre-validated. All scales were adapted based on social media contexts, and all responses were graded on a Likert scale ranging from 0 (strongly disagree) to 6 (strongly agree). A higher score was a better fit for that measure. Sub-items within each scale were averaged, resulting in composite scores.

The privacy invasion experiences scale was referenced from Su's study (Su et al., 2018 ). The scale is a 3-item self-reported scale (e.g., “My personal information, such as my phone number, shopping history, and more, is used to be shared by intelligent media with third-party platforms.”). The response cost scale was developed from the scale in the study by Yoon et al. ( 2012 ), which included three measurement questions (e.g., “When personal information security is at risk on social media, I consider that taking practical action will take too much time and effort.”). The privacy fatigue scale was derived from a related study by Choi et al. ( 2018 ), and the current study applied this 4-item scale to measure privacy fatigue on social media (e.g., “Dealing with personal information protection issues on social media makes me tired.”). The privacy protection intention scale was based on the scale developed by Liang and Xue ( 2010 ), which contains three measurement items (e.g., “When my personal information security is threatened on social media, I am willing to make efforts to protect it.”). The regulatory focus scale was derived from the original scale developed by Higgins ( 2002 ) and later adapted by Chinese scholars for use with Chinese samples (Cui et al., 2014 ). The scale contains six items on measures for promotion focus (e.g., “For what I want to do, I can do it all well”) and four items on measures for prevention focus (e.g., “While growing up, I often did things that my parents didn't agree were right”). The regulatory focus was measured by subtracting the average prevention score from the average promotion score, with higher differences indicating a greater tendency toward promotion focus and lower differences indicating a greater tendency toward prevention focus (Cui et al., 2014 ).

3.3. Data analysis

The validity and reliability of our questionnaire were tested using Mplus8. The PROCESS macro for SPSS was used to evaluate the moderated chain mediation model with the bootstrapping method (95 percent CI, 5,000 samples). Gender (1 = men, 0 = women), age, the highest degree obtained, and Internet lifetime are among the covariates examined in this model.

4.1. Measurement of the model

As shown in Table 2 , privacy invasion experiences, response costs, privacy fatigue, and privacy protection intentions are all factors to consider. Cronbach's α and composite reliability of scales are higher than the acceptable value (>0.70). Although the Cronbach's α for promotion and prevention focus were slightly <0.70, they were >0.60 and close to 0.70, which was also considered permissible due to the large sample size of this study, and the reliability test of the measurement model in this study was qualified (Hair et al., 2019 ).

Results of the validity and reliability.

PIE, privacy invasion experiences; PC, response costs; PF, privacy fatigue; PPI, privacy protection intentions. Bold value is the square root of AVE.

Since the measurement instruments in this study were derived from validated scales, the average variance extracted (AVE) was higher than 0.5, but we can accept 0.4. According to Fornell and Larcker ( 1981 ), if the AVE is <0.5, but the composite reliability is higher than 0.6, the construct's convergent validity is still acceptable (Fornell and Larcker, 1981 ). Further, Lam ( 2012 ) also explained and confirmed this view (Lam, 2012 ). Discriminant validity was tested by comparing the square root of AVE with the correlations of the researched variables. The square root of the AVE was higher than the correlation, indicating good discriminant validity.

Then, we tested the goodness of fit indices. Confirmatory factor analysis (CFA) of our questionnaire produced acceptable fit values for the one-dimensional factor structure (RMSEA = 0.048 0.15, SRMR = 0.042 0.05, GFI = 0.955 > 0.9, CFI = 0.947 > 0.9, NFI = 0.943 > 0.9, and 948 = 0.945 > 0.9) after introducing the error covariances in the model. In summary, the current study passed the reliability and validity tests.

4.2. Descriptive statistics

Table 3 shows the descriptive statistics and correlation analysis results. Response costs, privacy fatigue, and privacy protection intentions were all positively correlated with privacy invasion experiences. Privacy fatigue and privacy protection intentions were both positively correlated with response costs. Private fatigue was found to be negatively related to privacy protection intentions.

Means, standard deviations, and correlations among research variables.

PIE, privacy invasion experiences; PC, response costs; PF, privacy fatigue; PPI, privacy protection intentions; RF, regulatory focus; ** p < 0.01.

4.3. Relationship between privacy invasion experience and privacy protection intentions

Table 4 shows the results of the polynomial regression analysis. Privacy invasion experiences significantly influenced levels of response costs (β = 0.466, SE = 0.023, t = 11.936, p = 0.000), privacy fatigue (β = 0.297, SE = 0.022, t = 13.722, p = 0.000), and privacy protection intentions (β = 0.133, SE = 0.011, t = 12.382, p = 0.000) after controlling for gender, highest degree obtained, age, and Internet lifetime. Response costs positively predicted privacy fatigue (β = 0.382, SE = 0.013, t = 29.793, p = 0.000) and privacy protection intention (β = 0.098, SE = 0.010, t = 9.495, p = 0.000). However, privacy fatigue was significantly negatively correlated with privacy protection intentions (β = −0.130, SE = 0.011, t = −12.303, p = 0.000) in this model. In conclusion, H1, H2a, H2b, H3a, H3b, and H4a were supported.

Multiple regression results of the moderated mediation model.

PIE, privacy invasion experiences; PC, response costs; PF, privacy fatigue; PPI, privacy protection intentions; RF, regulatory focus; * p < 0.05; ** p < 0.01; *** p < 0.001; β, unstandardized regression weight; SE, standard error for the unstandardized regression weight; t, t-test statistic; F, F-test statistic.

Then, we used Model 6 of PROCESS to test the mediating effect in our model. As the results in Table 5 , H2c, H3c, and H4b were accepted.

Results of mediating effect test.

PIE, privacy invasion experiences; PC, response costs; PF, privacy fatigue; PPI, privacy protection intentions.

Model 84 in the SPSS PROCESS macro is applied to carry out the bootstrapping test to examine the moderation effect of regulatory focus. Privacy invasion experiences, response costs, privacy fatigue, and regulatory focus were centralized before constructing the interaction term. The results showed that regulatory focus significantly moderated the effect of privacy invasion experiences on privacy fatigue [95% Boot CI = (0.002, 0.006), and H5a was supported. In addition, the mediating effect was significant at a low level of regulatory focus (−1 SD; Effec t = −0.038; 95% Boot CI = (−0.046, −0.030)], medium level of regulatory focus [Effec t = −0.032; 95% Boot CI = (−0.039, −0.026)] and high level of regulatory focus [+1 SD; Effec t = −0.026; 95% Boot CI = (−0.032, 0.020)]. Specifically, the mediating effect of privacy fatigue decreased as individuals increasingly tended to be promotion focused. However, the regulatory focus did not significantly moderate the effect of privacy invasion experiences on response costs [95% Boot CI = (−0.001, 0.003)], and H5b was rejected.

Meanwhile, privacy invasion experiences × regulatory focus interaction significantly predicted privacy fatigue (β = −0.046, SE = 0.008, t = −3.694, p = 0.000; see Figure 2 ). The influence of privacy invasion experiences on privacy fatigue was significant when the level of regulatory focus was high (β = 0.385, SE = 0.016, t = 23.981, p = 0.000), medium (β = 0.430, SE = 0.015, t = 29.415, p = 0.000), and low (β = 0.475, SE = 0.022, t = 22.061, p = 0.000). Specifically, the more the individuals tended to be promotion focused (high regulatory focus scores), the less the level of fatigue caused by privacy invasion, and the more the individuals tended to be prevention focused (low regulatory focus scores), the more the level of fatigue was caused by privacy invasion.

An external file that holds a picture, illustration, etc.
Object name is fpsyg-13-1031592-g0002.jpg

Simple slope test of the interaction between PIE and RF on the PF.

5. Discussion

The purpose of the present study was to explore the relationship among privacy invasion experiences, response costs, privacy fatigue, privacy protection intentions, and regulatory focus. This study showed that response costs and privacy fatigue play mediating roles, whereas regulatory focus plays a moderating role in this process (as shown in Figure 3 ). These findings help clarify how and under which circumstances social media users' privacy invasion experiences affect their privacy protection intentions, thereby providing a means to improve people's privacy situation on social media platforms.

An external file that holds a picture, illustration, etc.
Object name is fpsyg-13-1031592-g0003.jpg

The moderated chain mediation model. Dashed lines represent nonsignificant relations *** p < 0.001.

5.1. A chain mediation of response costs and privacy fatigue

The current study found that social media users' privacy invasion experiences have a significant positive effect on their response costs, and the increase in response costs will in turn increase individuals' privacy protection intentions. This finding was consistent with previous literature on health psychology, which found that individuals calculate response costs for different actions before making decisions. The higher the response costs individuals perceive, the greater the possibility that they will improve their protective intention (Jones and Leary, 1994 ; Wichstrom, 1994 ). Compared with users who experienced less privacy invasion on social media, people who experienced more privacy violations would perceive a higher level of response costs, which would further increase their protective intention to avoid dealing with the negative outcomes followed by privacy invasion.

The study also found that social media users' privacy invasion experiences had a significant positive effect on privacy fatigue, which is consistent with prior research on social media use (Xiao and Mou, 2019 ; Sheng et al., 2022 ). At the same time, response costs also positively affected privacy fatigue, and research on social media fatigue behaviors indicated this influential mechanism in the past (Zhang et al., 2022 ). However, this study additionally found that response costs partially mediated the effect of privacy invasion experiences on privacy fatigue. Although both increased privacy invasion experiences and increased response costs will improve social media users' privacy protection intentions, privacy fatigue can mask this process, i.e., increased privacy fatigue reduces individuals' privacy protection intentions.

Moreover, this study revealed that response costs and privacy fatigue play chain-mediated roles in the effect of social media privacy invasion experiences on privacy protection intentions and further explained the mechanism. In addition, the masking effect of privacy fatigue also explains why privacy invasion experiences do not have a strong effect on privacy protection intentions. In other words, this privacy fatigue is an important reason that people currently “lie flat” (adopt passive protection) in the face of privacy-invasive issues online.

5.2. Regulatory focus as moderator

The relationship between social media privacy invasion experiences and privacy fatigue was moderated by regulatory focus. To be more specific, the more the people who promoted their privacy, the less the level of privacy fatigue they felt; the more the people who prevented their privacy, the more the level of privacy fatigue they felt. In other words, promotion focus acts as a buffer in this process. In other words, promotion focus has a buffering effect in this process. To some extent, the result of this study verified that different regulated individuals would sense different levels of fatigue due to their pursuing benefits or avoiding risks when they make decisions (Boksem and Tops, 2008 ; Jin, 2012 ). On the other hand, the regulatory focus did not moderate the relationship between privacy invasion experiences and response costs. One possible explanation is that, compared with privacy fatigue, response costs to privacy violations are based on exact experiences in users' memories. Individuals who have had more privacy invasions have more experience dealing with the negative consequences of privacy violations. Thus, whether psychological traits were added or not, the effect of privacy-invasive experiences on response costs would not be strengthened or weakened.

Meanwhile, this study has proven a moderated mediation model investigating the moderating role of regulatory focus in mediating “privacy invasion experiences—privacy fatigue—privacy protection intentions.” The results indicated that, as individuals tend to be prevention focused, privacy invasion experiences affect individuals' privacy protection intentions through the mediating role of privacy fatigue; specifically, the more they tend to be prevention focused, the stronger their privacy fatigue and the weaker their privacy protection intentions. Therefore, interventions for privacy fatigue (e.g., improving media literacy, creating a better online environment, and more) can be used to enhance social media users' privacy protection intentions (Bucher et al., 2013 ; Agozie and Kaya, 2021 ). In particular, focusing on social media users who tend to be prevention focused is crucial.

5.3. Implication

From a theoretical perspective, our study found a mechanism for influencing privacy-protective behavior based on an extension of the protective motivation theory. Protection motivation theory is a fear-based theory. We used our experiences with social media privacy invasions as a source of fear. Based on this, we found that these experiences were associated with individuals' privacy protection intentions. We explained the mechanism through the mediating variable of response costs, which is also consistent with previous findings (Chen et al., 2016 ).

More importantly, however, in response to what previous researchers have argued is an emotional factor that traditional protection motivation theory ignores (Mousavi et al., 2020 ), our study extended traditional protection motivation theory to include privacy fatigue as a factor and verified that fatigue significantly reduces social media users' privacy protection intentions. The introduction of “privacy fatigue” can better explain why occasional privacy invasion experiences do not cause privacy-protective behaviors, which is another possible explanation for the privacy paradox in addition to the traditional privacy calculus theory. The introduction of “privacy fatigue” has also inspired researchers to pay attention to individual emotions in privacy research. This study also compared differences in privacy protection intentions among social media users of different regulatory focus types, which are mainly caused by fatigue rather than response costs. By combining privacy fatigue and regulatory focus, it was found that not all subjects felt the same level of privacy fatigue after experiencing privacy invasion. This study also expanded the application of both privacy fatigue and regulatory focus theories and built a bridge between online privacy research and regulatory focus theory.

In addition to the aforementioned implications for research and theory, the findings also have some useful, practical implications. First of all, the findings of this piece ask for measures to reduce privacy invasion on social media. (a) Reducing the incidence of privacy violations at their root requires improving the current online privacy environment on social media platforms. We call on the government to strengthen the regulation of online privacy and social media platforms to reinforce the protection of users' privacy. To a large extent, users' personal information should not be misused. (b) From the social media agent perspective, relevant studies mentioned that content relevance perceived by online users could mitigate the negative relations between privacy invasion and continuous use intention (Zhu and Chang, 2016 ). Social media agents should improve their efficiency in using qualified personal information, giving users a smoother experience on online platforms.

Second, the results show that privacy fatigue could affect users' privacy protection intentions. (c) According to Choi et al. ( 2018 ), users have a tolerance threshold for privacy fatigue. The policy should formulate an acceptable level of privacy protection. Other scholars suggested that online service providers should avoid excessively or unnecessarily collecting personal information and forbid sharing or selling users' personal information strictly with any third party without their permission (Tang et al., 2021 ). (d) Another effective way is to reduce response costs to reduce the costs of protecting one's privacy. For example, social media platforms can optimize privacy interfaces and management tools or provide more effective feedback mechanisms for users. (e) In addition, improving users' privacy literacy (especially for prevention-focused individuals) can also be effective in reducing privacy fatigue (Bucher et al., 2013 ).

Finally, different measures should be applied based on different regulatory-focused users. (f) Social media managers could further classify users into groups based on their psychological characteristics and manage them in accordance with their requirements for the level of privacy protection. Thereby, social media users may have a wider range of choices. Specifically, due to previous privacy invasive experience, prevention-focused individuals tend to feel more privacy fatigue, requiring additional privacy protection features for prevention-focused users. For example, social media platforms could offer specific explanations of privacy protection technologies to increase prevention-focused individuals' trust in privacy protection technologies.

5.4. Limitations and future directions

There are still some limitations present in this article. Firstly, this study solely selected response costs as individuals' cognitive process, whereas threat appraisal was also included in the cognitive process of protection motivation theory, which focused on the potential outcomes of risky behaviors, including perceived vulnerability, perceived severity of the risk, and rewards associated with risky behavior (Prentice-Dunn et al., 2009 ). Future studies could systematically consider the association between these factors and privacy protection intentions. Second, users' perceptions of privacy invasion are different across various social media platforms (e.g., Instagram and Facebook), and this study only applies to a generalized social media context. Future research could pay more attention to the differences among users on different social media platforms (with different functions). Finally, this study did not focus on specific privacy invasion experiences. However, studies pointed out that different types of privacy invasions affect people differently. Moreover, people with different demographical backgrounds, such as cultural backgrounds and gender, would react differently when faced with the same situation (Klein and Helweg-Larsen, 2002 ). Future research can investigate this in more depth through experiments.

6. Conclusion

In conclusion, our findings suggest that social media privacy invasion experiences increase individuals' privacy protection intentions by increasing their response costs, but e increase in privacy fatigue masks this effect. Pivacy fatigue is a barrier to increasing social media users' willingness to protect their privacy, which explains why users do not seem to show a stronger willingness to protect their privacy when privacy invasion is a growing problem in social networks nowadays. Our study also revealed a different level of fatigue that individuals with different levels of regulatory focus exhibit when faced with the same level of privacy invasion experience. In particular, prevention-focused social media users are more likely to become fatigued. Therefore, social media agents should pay special attention to these individuals because they may be particularly vulnerable to privacy violations. Furthermore, the current research on privacy fatigue has yet to be expanded, and future researchers can add to it.

Our theoretical analysis and empirical results further emphasize the distinction between individuals, a differentiation that allows researchers to align their analyses with theoretical hypotheses more tightly. This applies not only to research on the effects of privacy invasion experiences on privacy behavior but also to exploring other privacy topics. Therefore, we recommend that future privacy research be more human-oriented, which will also benefit the current “hierarchical governance” of the Internet privacy issue.

Data availability statement

Ethics statement.

This study was approved by the Academic Committee of the School of Journalism and Communication at Xiamen University, and we carefully verified that we complied strictly with the ethical guidelines.

Author contributions

CG is responsible for the overall research design, thesis writing, collation of the questionnaire, and data analysis. SC and ML are responsible for the guidance. JW is responsible for the proofreading and article touch-up. All authors contributed to the article and approved the submitted version.

Acknowledgments

The authors thank all the participants of this study. The participants were all informed about the purpose and content of the study and voluntarily agreed to participate. The participants were able to stop participating at any time without penalty.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2022.1031592/full#supplementary-material

  • Agozie D. Q., Kaya T. (2021). Discerning the effect of privacy information transparency on privacy fatigue in e-government . Govern. Inf. Q . 38, 101601. 10.1016/j.giq.2021.101601 [ CrossRef ] [ Google Scholar ]
  • Baby A., Kannammal A. (2020). Network Path Analysis for developing an enhanced TAM model: A user-centric e-learning perspective . Comput. Hum. Behav . 107, 24. 10.1016/j.chb.2019.07.024 [ CrossRef ] [ Google Scholar ]
  • Bansal G., Zahedi F. M., Gefen D. (2010). The impact of personal dispositions on information sensitivity, privacy concern and trust in disclosing health information online . Decision Support Syst . 49 , 138–150. 10.1016/j.dss.2010.01.010 [ CrossRef ] [ Google Scholar ]
  • Baruh L., Secinti E., Cemalcilar Z. (2017). Online privacy concerns and privacy management: a meta-analytical review . J. Commun. 67 , 26–53. 10.1111/jcom.12276 [ CrossRef ] [ Google Scholar ]
  • Belanger F., Crossler R. E. (2019). Dealing with digital traces: Understanding protective behaviors on mobile devices . J. Strat. Inf. Syst. 28 , 34–49. 10.1016/j.jsis.2018.11.002 [ CrossRef ] [ Google Scholar ]
  • Benbasat I., Dexter A. S. (1982). Individual differences in the use of decision support aids . J. Account. Res . 20 , 1–11. 10.2307/2490759 [ CrossRef ] [ Google Scholar ]
  • Boksem M. A. S., Tops M. (2008). Mental fatigue: costs and benefits . Brain Res. Rev. 59 , 125–139. 10.1016/j.brainresrev.2008.07.001 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bucher E., Fieseler C., Suphan A. (2013). The stress potential of social media in the workplace . Inf. Commun. Soc . 16 , 1639–1667. 10.1080/1369118X.2012.710245 [ CrossRef ] [ Google Scholar ]
  • Chen H., Beaudoin C. E., Hong T. (2015). Teen online information disclosure: Empirical testing of a protection motivation and social capital model . J. Assoc. Inf. Sci. Technol . 67 , 2871–2881. 10.1002/asi.23567 [ CrossRef ] [ Google Scholar ]
  • Chen H., Beaudoin C. E., Hong T. (2016). Protecting oneself online: The effects of negative privacy experiences on privacy protective behaviors . J. Mass Commun. Q. . 93 , 409–429. 10.1177/1077699016640224 [ CrossRef ] [ Google Scholar ]
  • Cho H., Roh S., Park B. (2019). Of promoting networking and protecting privacy: effects of defaults and regulatory focus on social media users' preference settings . Comput. Hum. Behav . 101 , 1–13. 10.1016/j.chb.2019.07.001 [ CrossRef ] [ Google Scholar ]
  • Choi H., Park J., Jung Y. (2018). The role of privacy fatigue in online privacy behavior . Comput. Hum. Behav . 81 , 42–51. 10.1016/j.chb.2017.12.001 [ CrossRef ] [ Google Scholar ]
  • Cui Q., Yin C. Y., Lu H. L. (2014). The reaction of consumers to others' assessments under different social distance . Chin. J. Manage . 11 , 1396–1402. [ Google Scholar ]
  • Dhir A., Kaur P., Chen S., Pallesen S. (2019). Antecedents and consequences of social media fatigue . Int. J. Inf. Manage . 8 , 193–202. 10.1016/j.ijinfomgt.2019.05.021 [ CrossRef ] [ Google Scholar ]
  • Floyd D. L., Prentice-Dunn S., Rogers R. W. A. (2000). meta-analysis of research on protection motivation theory . J. Appl. Soc. Psychol . 30 , 407–429. 10.1111/j.1559-1816.2000.tb02323.x [ CrossRef ] [ Google Scholar ]
  • Fornell C., Larcker D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error . J. Market. Res . 18 , 39–50. 10.1177/002224378101800104 [ CrossRef ] [ Google Scholar ]
  • Friestad M., Thorson E. (1985). The Role of Emotion in Memory for Television Commercials . Washington, DC: Educational Resources Information Center. [ Google Scholar ]
  • Furnell S., Thomson K. L. (2009). Recognizing and addressing “security fatigue” . Comput. Fraud Secur . 11 , 7–11. 70139-3 10.1016/S1361-3723(09)70139-3 [ CrossRef ] [ Google Scholar ]
  • Hair J. F., Ringle C. M., Gudergan S. P. (2019). Partial least squares structural equation modeling-based discrete choice modeling: an illustration in modeling retailer choice . Bus. Res . 12 , 115–142. 10.1007/s40685-018-0072-4 [ CrossRef ] [ Google Scholar ]
  • Hargittai E., Marwick A. (2016). “What can I really do?” Explaining the privacy paradox with online apathy . Int. J. Commun. 10 , 21. 1932–8036/20160005. [ Google Scholar ]
  • Higgins E. T. (1997). Beyond pleasure and pain . Am. Psychol . 52 , 1280–1300. 10.1037/0003-066X.52.12.1280 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Higgins E. T. (2002). How self-regulation creates distinct values: the case of promotion and prevention decision making . J. Consum. Psychol . 12 , 177–191. 10.1207/S15327663JCP1203_01 [ CrossRef ] [ Google Scholar ]
  • Hsu C. L., Park S. J., Park H. W. (2013). Political discourse among key Twitter users: the case of Sejong city in South Korea . J. Contemp. Eastern Asia . 12 , 65–79. 10.17477/jcea.2013.12.1.065 [ CrossRef ] [ Google Scholar ]
  • Jin S. A. A. (2012). To disclose or not to disclose, that is the question: A structural equation modeling approach to communication privacy management in e-health . Comput. Hum. Behav . 28 , 69–77. 10.1016/j.chb.2011.08.012 [ CrossRef ] [ Google Scholar ]
  • Jones J. L., Leary M. R. (1994). Effects of appearance-based admonitions against sun exposure on tanning intentions in young-adults . Health Psychol. 13 , 86–90. 10.1037/0278-6133.13.1.86 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Juhee K. Eric J. The Market Effect of Healthcare Security: Do Patients Care About Data Breaches? (2018). Available online at: https//www.econinfosec.org/archive/weis2015/papers/WEIS_2015_kwon.pdf (accessed October 30, 2018).
  • Keith M. J., Maynes C., Lowry P. B., Babb J. (2014). “Privacy fatigue: the effect of privacy control complexity on consumer electronic information disclosure,” in International Conference on Information Systems (ICIS 2014) , Auckland , 14–17. [ Google Scholar ]
  • Kisekka V., Giboney J. S. (2018). The effectiveness of health care information technologies: evaluation of trust, security beliefs, and privacy as determinants of health care outcomes . J. Med. Int. Res . 20, 9014. 10.2196/jmir.9014 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Klein C. T., Helweg-Larsen M. (2002). Perceived control and the optimistic bias: a meta-analytic review . Psychol. Health . 17 , 437–446. 10.1080/0887044022000004920 [ CrossRef ] [ Google Scholar ]
  • Lai Y. L., Hui K. L. (2006). “Internet opt-in and opt-out: Investigating the roles of frames, defaults and privacy concerns,” in Proceedings of the 2006 ACM SIGMIS CPR Conference on Computer Personnel Research . New York, NY: ACM , 253–263. [ Google Scholar ]
  • Lam L. W. (2012). Impact of competitiveness on salespeople's commitment and performance . J. Bus. Res. 65 , 1328–1334. 10.1016/j.jbusres.2011.10.026 [ CrossRef ] [ Google Scholar ]
  • Li H., Wu J., Gao Y., Shi Y. (2016). Examining individuals' adoption of healthcare wearable devices: an empirical study from privacy calculus perspective . Int. J. Med. Inf. 88 , 8–17. 10.1016/j.ijmedinf.2015.12.010 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Li P., Cho H., Goh Z. H. (2019). Unpacking the process of privacy management and self-disclosure from the perspectives of regulatory focus and privacy calculus . Telematic. Inf. 41 , 114–125. 10.1016/j.tele.2019.04.006 [ CrossRef ] [ Google Scholar ]
  • Li X. (2008). Third-person effect, optimistic bias, and sufficiency resource in Internet use . J. Commun . 58 , 568–587. 10.1111/j.1460-2466.2008.00400.x [ CrossRef ] [ Google Scholar ]
  • Liang H., Xue Y. L. (2010). Understanding security behaviors in personal computer usage: a threat avoidance perspective . J. Assoc. Inf. Syst . 11 , 394–413. 10.17705/1jais.00232 [ CrossRef ] [ Google Scholar ]
  • Mao H., Bao T., Shen X., Li Q., Seluzicki C., Im E. O., et al.. (2018). Prevalence and risk factors for fatigue among breast cancer survivors on aromatase inhibitors . Eur. J. Cancer . 101 , 47–54. 10.1016/j.ejca.2018.06.009 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • McLeod A., Dolezel D. (2022). Information security policy non-compliance: can capitulation theory explain user behaviors? Comput. Secur . 112, 102526. 10.1016/j.cose.2021.102526 [ CrossRef ] [ Google Scholar ]
  • Mohamed N., Ahmad I. H. (2012). Information privacy concerns, antecedents and privacy measure use in social networking sites: evidence from Malaysia . Comput. Hum. Behav . 28 , 2366–2375. 10.1016/j.chb.2012.07.008 [ CrossRef ] [ Google Scholar ]
  • Mousavi R., Chen R., Kim D. J., Chen K. (2020). Effectiveness of privacy assurance mechanisms in users' privacy protection on social networking sites from the perspective of protection motivation theory . Decision Supp. Syst . 135, 113323. 10.1016/j.dss.2020.113323 [ CrossRef ] [ Google Scholar ]
  • Oh J., Lee U., Lee K. (2019). Privacy fatigue in the internet of things (IoT) environment . INPRA 6 , 21–34. [ Google Scholar ]
  • Okazaki S., Li H., Hirose M. (2009). Consumer privacy concerns and preference for degree of regulatory control . J. Adv. 38 , 63–77. 10.2753/JOA0091-3367380405 [ CrossRef ] [ Google Scholar ]
  • Ong A. D., Bergeman C. S., Bisconti T. L., Wallace K. A. (2006). Psychological resilience, positive emotions, and successful adaptation to stress in later life . J. Pers. Soc. Psychol. 91 , 730. 10.1037/0022-3514.91.4.730 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Pavlou P. A., Gefen D. (2005). Psychological contract violation in online marketplaces: antecedents, consequences, and moderating role . Inf. Syst. Res. 16 , 372–399. 10.1287/isre.1050.0065 [ CrossRef ] [ Google Scholar ]
  • Petronio S. (2002). Boundaries of Privacy: Dialectics of Disclosure . Albany, NY: State University of New York Press. [ Google Scholar ]
  • Piper B. F., Lindsey A. M., Dodd M. J. (1987). Fatigue mechanisms in cancer patients: developing nursing theory . Oncol. Nurs. Forum . 14, 17. [ PubMed ] [ Google Scholar ]
  • Prentice-Dunn S., Mcmath B. F., Cramer R. J. (2009). Protection motivation theory and stages of change in sun protective behavior . J. Health Psychol . 14 , 297–305. 10.1177/1359105308100214 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Price B. A., Adam K., Nuseibeh B. (2005). Keeping ubiquitous computing to yourself: a practical model for user control of privacy . Int. J. Hum. Comput. Stu. 63 , 228–253. 10.1016/j.ijhcs.2005.04.008 [ CrossRef ] [ Google Scholar ]
  • Ravindran T., Yeow Kuan A. C., Hoe Lian D. G. (2014). Antecedents and effects of social network fatigue . J. Assoc. Inf. Sci. Technol . 65 , 2306–2320. 10.1002/asi.23122 [ CrossRef ] [ Google Scholar ]
  • Revell T. (2019). Facebook Must Come Clean and Hand Over Election Campaign Data. New Scientist . Available online at: https://www.newscientist.com/article/mg24332472-300-face-book-must-come-clean-and-hand-over-election-campaign-data/ (accessed September 11, 2019).
  • Rogers R. W. A. (1975). protection motivation theory of fear appeals and attitude change . J. Psychol . 91 , 93–114. 10.1080/00223980.1975.9915803 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sheng N., Yang C., Han L., Jou M. (2022). Too much overload and concerns: antecedents of social media fatigue and the mediating role of emotional exhaustion . Comput. Hum. Behav. 139 , 107500. 10.1016/j.chb.2022.107500 [ CrossRef ] [ Google Scholar ]
  • Su P., Wang L., Yan J. (2018). How users' internet experience affects the adoption of mobile payment: a mediation model . Technol. Anal. Strat. Manage . 30 , 186–197. 10.1080/09537325.2017.1297788 [ CrossRef ] [ Google Scholar ]
  • Tang J., Akram U., Shi W. (2021). Why people need privacy? The role of privacy fatigue in app users' intention to disclose privacy: based on personality traits . J. Ent. Inf. Manage . 34 , 1097–1120. 10.1108/JEIM-03-2020-0088 [ CrossRef ] [ Google Scholar ]
  • Tanner J. F., Hunt J. B., Eppright D. R. (1991). The protection motivation model: a normative model of fear appeals . J. Market . 55 , 36–45. 10.1177/002224299105500304 [ CrossRef ] [ Google Scholar ]
  • Tian X., Chen L., Zhang X. (2022). The role of privacy fatigue in privacy paradox: a psm and heterogeneity analysis . Appl. Sci. 12 , 9702. 10.3390/app12199702 [ CrossRef ] [ Google Scholar ]
  • Tversky A., Kahneman D. (1974). Judgement under uncertainty: heuristics and biases . Science . 185 , 1124–1131. 10.1126/science.185.4157.1124 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wang L., Yan J., Lin J., Cui W. (2017). Let the users tell the truth: Self-disclosure intention and self-disclosure honesty in mobile social networking . Int. J. Inf. Manage . 37 , 1428–1440. 10.1016/j.ijinfomgt.2016.10.006 [ CrossRef ] [ Google Scholar ]
  • Wichstrom L. (1994). Predictors of Norwegian adolescents sunbathing and use of sunscreen . Health Psychol. 13 , 412–420. 10.1037/0278-6133.13.5.412 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wirtz J., Lwin M. O. (2009). Regulatory focus theory, trust, and privacy concern . J. Serv. Res . 12 , 190–207. 10.1177/1094670509335772 [ CrossRef ] [ Google Scholar ]
  • Wu Z., Xie J., Lian X., Pan J. (2019). A privacy protection approach for XML-based archives management in a cloud environment . Electr. Lib . 37 , 970–983. 10.1108/EL-05-2019-0127 [ CrossRef ] [ Google Scholar ]
  • Xiao L., Mou J. (2019). Social media fatigue -Technological antecedents and the moderating roles of personality traits: the case of WeChat . Comput. Hum. Behav . 101 , 297–310. 10.1016/j.chb.2019.08.001 [ CrossRef ] [ Google Scholar ]
  • Xu F., Michael K., Chen X. (2013). Factors affecting privacy disclosure on social network sites: an integrated model . Electr. Comm. Res 13 , 151–168. 10.1007/s10660-013-9111-6 [ CrossRef ] [ Google Scholar ]
  • Yoon C., Hwang J. W., Kim R. (2012). Exploring factors that influence students' behaviors in information security . J. Inf. Syst. Educ . 23 , 407–415. [ Google Scholar ]
  • Youn S., Kim S. (2019). Newsfeed native advertising on Facebook. Young millennials' knowledge, pet peeves, reactance and ad avoidance . Int. J. Adv . 38 , 651–683. 10.1080/02650487.2019.1575109 [ CrossRef ] [ Google Scholar ]
  • Zhang Y., He W., Peng L. (2022). How perceived pressure affects users' social media fatigue behavior: a case on WeChat . J. Comput. Inf. Syst . 62 , 337–348. 10.1080/08874417.2020.1824596 [ CrossRef ] [ Google Scholar ]
  • Zhou Y., Schaub F. (2018). “Concern but no action: consumers, reactions to the equifax data breach,” in Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems , Montreal, QC , 22–26. [ Google Scholar ]
  • Zhu Y.Q., Chang J. H. (2016). The key role of relevance in personalized advertisement: examining its impact on perceptions of privacy invasion, self-awareness, and continuous use intentions . Comput. Hum. Behav . 65 , 442–447. 10.1016/j.chb.2016.08.048 [ CrossRef ] [ Google Scholar ]
  • Listening Tests
  • Academic Tests
  • General Tests
  • IELTS Writing Checker
  • IELTS Writing Samples
  • Speaking Club
  • IELTS AI Speaking Test Simulator
  • Latest Topics
  • Vocabularying
  • 2024 © IELTS 69

Does Social Media Violate Our Privacy?

This is funny writing

IELTS essay Does Social Media Violate Our Privacy?

  • Structure your answers in logical paragraphs
  • ? One main idea per paragraph
  • Include an introduction and conclusion
  • Support main points with an explanation and then an example
  • Use cohesive linking words accurately and appropriately
  • Vary your linking phrases using synonyms
  • Try to vary your vocabulary using accurate synonyms
  • Use less common question specific words that accurately convey meaning
  • Check your work for spelling and word formation mistakes
  • Use a variety of complex and simple sentences
  • Check your writing for errors
  • Answer all parts of the question
  • ? Present relevant ideas
  • Fully explain these ideas
  • Support ideas with relevant, specific examples
  • ? Currently is not available
  • Meet the criteria
  • Doesn't meet the criteria
  • 5 band People Are Becoming Too Dependent on the Phone and Internet, is it a Positive or a Negative Development? Technology has changed the way we live. From communication, to accomplishing daily tasks and even when it comes to education and learning. With just a click of a button, we can do various tasks all at once and have access to valuable information all over the world. We can pay bills, shop groceries, ...
  • 5.5 band There are conflicting views about the most suitable place for people to live. While some believe that living in urban is beneficial, I would argue that there are more advantages brought when living in the country. There are conflicting views about the most suitable place for people to live. While some believe that living in urban is beneficial, I would argue that there are more advantages brought when living in the country. There are many reasons why some people believe that finding accommodation in an urban ...
  • Language is the blood of the soul into which thoughts run and out of which they grow. Oliver Wendell Holmes
  • 6 band An ambition is positive or negative? Nowdays, people believe that being an ambitious person is an inappropriate attitude. From my viewpoint, in this competitive world we need this attribute to rescue our life's condition. For example, this behavior gives us a power to achieve what we want. We have to carefully use this particular ethic ...
  • 6 band particular are under 3 to nowadays due to the fact that we are living in a global village. what do you think can be done to protect a society traditional values and Culture Nowadays, by growing social media from all over the word specific and old cultures are in danger of disappearing. It is true that developed societies tend to be integration by copying each other. it cause numerous Terminator consequences. This problem can solved by essential Solutions. For instance, ...
  • Knowledge of languages is the doorway to wisdom. Roger Bacon
  • 5 band You recently returned to Toronto from San Francisco. But your luggage got misplaced. Dear Airport manager, I recently returned to Toronto from San Francisco via Vancouver on flight A220 (Air Canada) and A124 respectively (Air Canada) which arrived on 24th Feb at 20: 30 at Toronto airport. My luggage which supposed to check all the way to my final destination, baggage tag number-A12 ...
  • 5 band Let's go bats and play here It is known for everyone that bats are perceived to be the mammals which hunt at nights. However, they have an engineering problem: they can't find their ways at night. Although some people might disagree with the opinion and state that they should change their habits as nocturnal animals, it is an ...
  • The most intimate temper of a people, its deepest soul, is above all in its language. Jules Michelet

IMAGES

  1. Does Social Media Violate Our Privacy

    essay on does social media violate our privacy

  2. Does social media violate our privacy

    essay on does social media violate our privacy

  3. 4.docx

    essay on does social media violate our privacy

  4. Social Media and Violence

    essay on does social media violate our privacy

  5. How Social Media Violate Our Privacy

    essay on does social media violate our privacy

  6. Does social media violate our privacy.docx

    essay on does social media violate our privacy

VIDEO

  1. How does social media affect your BPD? Please comment below 🖤🤍#myborderline #menshealth #borderline

  2. What impact does social media have on personal privacy

  3. Does Age Verification on Social Media Violate Our Rights? #shorts #socialmedia #rights

  4. The Implications of Social Media on Public Policy, on State of Affairs

  5. The Importance Of Data Privacy: What You Need To Know #cybersecurity #dataprotection

  6. Ch10 : Ethical issues in social media

COMMENTS

  1. Social Media Is a Threat to Privacy, Essay Example

    Social media users usually post private information as part of the process of knowing one another. Since social media is associated with having a large number of users unknown to the client, there is an increased risk of exposing personal details to cybercriminals. Social media is a threat to privacy. Social media has increased privacy concerns ...

  2. How Americans feel about social media and privacy

    Overall, a 2014 survey found that 91% of Americans "agree" or "strongly agree" that people have lost control over how personal information is collected and used by all kinds of entities. Some 80% of social media users said they were concerned about advertisers and businesses accessing the data they share on social media platforms, and ...

  3. Social Media and Privacy: The Dangers and Privacy Issues

    According to Cha (2011), security and privacy problems emanating from social media are classified into behavioral and technical issues. One of the problems associated with social media is the invasion of privacy. This has been attributed to technological advancements that have made invasion of privacy not only feasible, but also achievable.

  4. Privacy Risks and Social Media

    The core privacy risks on platforms like Facebook and Twitter include data collection, targeted advertising, tracking user behavior, security breaches and more. When signing up for social media, users are typically required to provide personal information like name, email, birthdate, interests, location and more.

  5. The Assault on Our Privacy Is Being Conducted in Private

    The assaults on our privacy have become not only more secretive but also far more efficient. Americans once blanched at government efforts to sweep up data, including through the Patriot Act after ...

  6. The Battle for Digital Privacy Is Reshaping the Internet

    Now that system, which ballooned into a $350 billion digital ad industry, is being dismantled. Driven by online privacy fears, Apple and Google have started revamping the rules around online data ...

  7. Social Media Users' Legal Consciousness About Privacy

    Drawing on the concept of legal consciousness, this article investigates through focus group interviews, the ways in which social media users make sense of privacy as a right and the ways in which they experience and respond to challenges to privacy. Our research aims to explore what role, if any, law—both private and public policy—plays in ...

  8. Social Media Privacy

    For more than a decade, EPIC has advocated before Congress, the courts, and the Federal Trade Commission to protect the privacy of social media users. Beginning in 2008, EPIC warned of the exact problem that would later lead to the Facebook Cambridge Analytica scandal. In Senate testimony in 2008, then-EPIC President Marc Rotenberg stated that ...

  9. Social media and its effect on privacy

    articles, magazine articles, and research papers pertaining to social media to determine what effects social media has on the user's privacy and how much trust should be placed in social media networks such as Facebook. It provides a comprehensive view of the most used social media networks in 2012 and offers methods and suggestions for users ...

  10. Privacy in Social Media

    Abstract. Most people's immediate concern about privacy in social media, and about the internet more generally, relates to data protection. People fear that information they post on various platforms is potentially abused by corporate entities, governments, or even criminals, in all sorts of nefarious ways. The main premise of this chapter is ...

  11. Social Media and Privacy

    Section Highlights. Information disclosure privacy issues have been a dominant focus in online technologies and the primary focus for social media. It focuses on access to data and defining public vs. private disclosures.It emphasizes user control over who sees what. With so many people from different social circles able to access a user's social media content, the issues of context collapse ...

  12. Why protecting privacy is a losing game today—and how to ...

    July 12, 2018. Recent congressional hearings and data breaches have prompted more legislators and business leaders to say the time for broad federal privacy legislation has come. Cameron Kerry ...

  13. Privacy Issues with Social Media

    Hence, I assert that the government should take action to protect citizen's privacy on social media as soon as possible because we are reaching a point in technology where the fine line between on-screen and off-screen are becoming meshed together. Social media surveillance from government agencies is another surrounding controversial issue.

  14. Full article: Ethical concerns about social media privacy policies: do

    Introduction. With 4.76 billion (59.4%) of the global population using social media (Petrosyan, Citation 2023) and over 46% of the world's population logging on to a Meta Footnote 1 product monthly (Meta, Citation 2022), social media is ubiquitous and habitual (Bartoli et al., Citation 2022; Geeling & Brown, Citation 2019).In 2022 alone, there were over 500 million downloads of the image ...

  15. On Privacy and Security in Social Media

    This paper provides a comprehensive study of privacy and security issues in social media, covering various aspects such as user behavior, data collection, legal frameworks, and technical solutions ...

  16. Social Media & Privacy: A Facebook Case Study

    Globally, the website h as over 968 million. daily users and 1.49 billion monthly users, with nearl y 844 million mobile daily users and. 3.31 billion mobile monthly users ( See Figure 1 ...

  17. Teens, Social Media, and Privacy

    74% of teen social media users have deleted people from their network or friends' list; 58% have blocked people on social media sites. Given the size and composition of teens' networks, friend curation is also an integral part of privacy and reputation management for social media-using teens.

  18. 6 Common Social Media Privacy Issues

    Data protection issues and loopholes in privacy controls can put user information at risk when using social media. Other social media privacy issues include the following. 1. Data mining for identity theft. Scammers do not need a great deal of information to steal someone's identity.

  19. Full article: Online Privacy Breaches, Offline Consequences

    This is a critical point, as there are few alternatives to using many of services (e.g., search engines) that strip people of their privacy. Similarly, given that social media frequency leads to stronger social connections, and ultimately well-being (Roberts & David, Citation 2020), quitting social media may lead to a cut in ties with friends ...

  20. Can We Protect Our Privacy on Social Media?

    Mark Engler. There's a hidden cost to our free accounts on Facebook, Instagram, Snapchat, and other social media platforms: our privacy. In this lesson, students learn about and discuss how corporations make a profit from our data, potential policy solutions, and how young people are making their own decisions about online privacy. Current Issues.

  21. Privacy Issues Concerning Social Media

    Ever since social media was introduced as such a necessary and almost vital part of our lives, several concerns have risen about the boundaries at which we must draw in terms of our privacy. The mass of information users pour into this endless stream of data is most frequently being misused for profit-oriented purposes.

  22. What is The Impact of Social Media on Privacy?

    As social media has become more popular, people have become increasingly worried about the privacy implications of using these platforms. In recent years, there have been a number of high-profile cases in which users' private information has been leaked or stolen as a result of using social media. Additionally, many people are concerned about the way that social media companies collect and ...

  23. Research on the influence mechanism of privacy invasion experiences

    From a theoretical perspective, our study found a mechanism for influencing privacy-protective behavior based on an extension of the protective motivation theory. Protection motivation theory is a fear-based theory. We used our experiences with social media privacy invasions as a source of fear.

  24. IELTS essay Does Social Media Violate Our Privacy?

    Data privacy and violation have become an alarming concern over recent years as the information shared on social media, becomes compromised. It poses a greater risk as users become susceptible and vulnerable to cyberattacks and data breaches. Cases of identity theft, leaking of personal information have risen over the years, and the most common ...

  25. Applied Sciences

    Since 2021, China's promotion of common prosperity has captured global attention and sparked considerable debate. Yet, scholarly examination of the Chinese public's attitudes toward this policy, which is crucial for guiding China's strategic directions, remains limited. To address this gap, this paper collects 256,233 Sina Weibo posts from 2021 to 2023 and utilizes text mining methods ...