2018 has seen countries around the world become stricter in their filtering of undesirable online content.

Although “filtering” and “censoring” have very different connotations, with the former suggesting a benefit to the user and the latter implying restriction, both omit or inhibit certain information.

How we perceive censorship

When we think of internet censorship, we might picture Russia, Southeast Asia, or the Middle East, where some sites are banned and anti-establishment rhetoric is extremely dangerous.

In China, WhatsApp and Facebook are banned outright, and 1.5% of “official” (brands, media, and celebrities’) posts on WeChat, China’s permitted social media app, are censored, with “evidence of automatic review filters preventing posts with certain blacklisted words from being published.” This figure does not include posts blocked before they could even be published: messages containing banned words, such as those relating to outlawed religious practices, cannot be sent at all; an error message simply appears instead.

The rise of “fake news”

Increasingly, countries’ overt restriction of information is being accompanied by content manipulation, which is far more insidious. As part of its censorship, China removes “fake news” posts, which have included the false announcement of an actor’s death, an article claiming that KFC and McDonald’s are raising birds with hormones that cause them to grow extra legs and wings, and a claim that padded bras cause breast cancer.

At first glance, this seems like a sensible move. Filtering wildly inaccurate news stories would prevent falsehoods from being presented as facts and avoid confusion. However, as part of China’s broader control of the media, it also pushes citizens to rely on China’s “verified” news sources. Social media users who regularly encounter suspicious articles become more discerning about what to believe; without that exposure, Chinese citizens become further dependent on a single news source.

The right to post content on any social network means that fake news can spread as people and bots share or boost it, making something fake seem more credible. Facebook has provided a help sheet for users to identify whether a news story is false, leaving the onus with the user, rather than the platform, to filter content.

How we censor ourselves and filter our words

We’re used to “filtering” meaning that we are shown only what we want. We filter water for impurities, cinema listings by our location, and our shopping searches by what we want to buy. The filtered-out items would only slow us down, waste our time, or otherwise get in the way, so there’s no point in including them.

In our personal lives, we also create mental filters, some more consciously than others. We don’t say certain things because they may be inappropriate or cause harm to others; we modify our behaviour to fit in with the social context.

Ideally, these considerations would carry over into our online interactions, but this is clearly not always the case: being online lowers our personal filters, and many people express opinions they wouldn’t in person. Anonymity and a lack of social constraints can also encourage toxic disinhibition, whereby users’ empathetic filtering is minimized and they communicate in ways they know to be offensive or upsetting, such as cyberbullying or acting like an online “troll.”

In the UK, a sixteen-year-old girl was fired from her job after labelling it “boring” on Facebook. The company’s managing director argued that, “Had [she] put up a poster on the staff notice board making the same comments and invited other staff to read it there would have been the same result,” implying that an employee is as responsible for her actions outside the office as within working hours. Although her speech wasn’t filtered by Facebook as problematic, the offline consequences are likely to discourage other users from expressing similar sentiments.

Teenagers complaining about a job they find “boring” may seem relatively harmless – a means of letting off steam outside of work. Other posts, however, have had far more serious repercussions: last year, Disney and YouTube cut ties with YouTube megastar PewDiePie after he posted anti-Semitic videos, and fellow internet performer Logan Paul was criticized after posting an insensitive video in which he laughed at the body of a suicide victim.

Paul, however, wasn’t removed from YouTube altogether; instead, he was dropped from upcoming YouTube series and lost his place in the Google Preferred program, which brings in more revenue from ads.

Both PewDiePie (real name Felix Kjellberg) and Logan Paul took a financial hit for their content, and the offending videos were removed. Paul took down the suicide video voluntarily, and a petition for his YouTube channel to be deleted has gathered almost 700,000 signatures. YouTube released a statement saying that it “prohibits violent or gory content posted in a shocking, sensational or disrespectful manner. If a video is graphic, it can only remain on the site when supported by appropriate educational or documentary information and in some cases it will be age-gated.”

Platforms are therefore taking responsibility for the content they provide, meaning that there is some filtering of people’s uploads.

How online rhetoric spills into the real world

News stories about troll-induced suicides, where one or more people cyberbully another to the point of taking their own life, have become distressingly commonplace. In these cases a specific target is hounded, whether a classmate, an acquaintance, or a celebrity, as with Charlotte Dawson. Negative opinions may also be directed at an entire group or demographic, however, with similarly dangerous results.

A recent paper found that online hate speech directed towards Germany’s refugee population correlated with attacks on those refugees shortly afterwards: for every four posts critical of refugees, there was one additional anti-refugee incident, including arson and assault.

In light of its history, Germany takes a particularly strong stance against hate speech: social media platforms are now required to delete “clearly illegal” posts within twenty-four hours, or within a week when the content is less easily defined, or pay fines of up to €50 million ($60 million). While this seems a positive step in preserving both the mental and physical health of people at risk, it is complicated by the tension between protecting free speech and prohibiting hate speech, and with such hefty fines involved, many platforms are erring on the side of caution rather than risking the fee.

We expect restrictions on “obscene” material, but what counts as obscene and offensive differs widely from individual to individual. We should take care not to exercise heavy-handed control over content that is merely unlikeable, while still considering the very real effects of amplifying harmful speech and media.