Opinion: No, Censoring Dangerous Opinions Isn’t a Rights Violation

"Thinker thinks about how to take sun burst shot" by davidyuweb is licensed under CC BY-NC 2.0.

To what extent are social media platforms responsible for the content their users put out? Should harmful ideologies be given a platform? Deplatforming is a practice in which platforms permanently ban public figures and extremist groups to stop them from spreading ideas the platform deems harmful. By taking away users’ access to the site, platforms hope to strip away their power to influence others with potentially dangerous ideas. In recent years, we have seen multiple high-profile cases of sites like Twitter, Instagram and Facebook using this strategy to keep hate speech off their respective platforms, garnering both approval and condemnation from netizens.

The main criticism of deplatforming is that it violates one’s right to free speech. In what is now one of the most well-known cases of deplatforming, former President Donald Trump was banned from both Twitter and Facebook in January 2021, following the Jan. 6 attack on the Capitol. Twitter explained in a blog post that his removal from the platform was a preventative measure, taken out of concern that Trump’s supporters might read his recent tweets as further incitement to violence.

Trump’s suspension brought deplatforming as a concept to the forefront of public discussion. Many expressed outrage, arguing that removing Trump was censorship and that it infringed on the First Amendment, which protects Americans’ right to free speech. The idea of banning users based solely on what a platform deems “dangerous” unsettled many. Some conservatives also believe that platforms hold a liberal bias, heavily influencing which opinions the platforms choose to suppress.

Popular social media platforms like Twitter and Facebook have become places for global congregation and discussion, making online discourse feel like it is occurring in a public space. However, no matter how much these platforms feel like a public forum, they are private organizations, so it is not possible for them to violate the First Amendment. The First Amendment only protects citizens from government interference; privately owned social media platforms are not bound to uphold the right to free speech to the same degree.

So why do social media platforms choose to deplatform users? The rise of social media owes much to the online communities it houses. Users from around the world are able to find others with similar ideologies and interests through hashtags and algorithms that track user activity. Extremist groups like ISIS have exploited social media’s ability to rapidly spread propaganda and attract new supporters.

In 2015, the Brookings Institution published a study on the deplatforming of ISIS “influencers.” The study found a decrease in the amount of ISIS content being put out, limiting the group’s influence and recruitment efforts. When banned users tried to return, they struggled to rebuild their followings and were pushed onto more obscure platforms. This displacement is highly effective: on lesser-known platforms, extremist groups have less access to the general public and, in some cases, more limited communication between members.

Freedom of expression is a fundamental human right that should be upheld by everyone. However, keeping dangerous ideologies in check must also be a top priority for social media platforms. In the digital age, social media has become a prevalent part of our daily lives, and the content we see on these platforms has the power to shape and change our worldviews. Social media companies should take responsibility for the continuous moderation of content, and deplatforming should be used to stop harmful ideas from garnering attention. At the same time, platforms should be held accountable for each suspension and should explain the reasoning behind their decisions.