Social media platforms are often used to connect with friends and family, but these days they are filled with extremist content promoting terrorism, pedophilia, hate, aggravated insults, and discrimination, among other things.

The new law mandates social media platforms like Facebook and Instagram to remove extremist content within an hour or pay a hefty fine.

France Passes a New Law

Platforms like Facebook and Instagram find it hard to track such content, let alone remove it.

To help fight hateful content, France has now passed a new law that forces these major social media platforms to remove any content relating to terrorism and pedophilia within an hour of notification, according to a report by The Next Web.

Moreover, the law also mandates them to delete any hateful content that promotes sexual abuse, sexual discrimination, or aggravated insults within 24 hours of being notified.

If these social media platforms fail to comply, they face a hefty fine of 1.2 million euros (around $1.35 million), up sharply from the initial amount of only 37,500 euros (about $40,534).


Not a Good Thing for Freedom of Speech?

Although the move promises to make social media platforms a safer space for every user, especially young ones, the law has been met with criticism from privacy advocates.

The digital rights organization La Quadrature du Net said the move could throttle people's right to freedom of speech, arguing that the new law would give authorities unprecedented power to decide what counts as terrorism.

The group fears the police could use that power against demonstrators, or block any website across the country by claiming it violated the 24-hour rule.

Nevertheless, social media giant Facebook welcomed the move.

A Facebook spokesperson said in a statement that the new law could help in its fight against extremist content, which the company says has been a top priority for years.

"We have clear rules against it and have invested in people and technology to better identify and remove it. Regulation is important in helping combat this type of [extremist] content. We will work closely with the Conseil supérieur de l'audiovisuel and other stakeholders on the implementation of this law," the spokesperson said.

Besides Facebook and Instagram, other platforms like Twitter and internet giant Google are also affected by the new law, but neither has commented on the issue so far.

France is not the only country to make such a move against hateful and extremist content; the European Union has previously proposed similar guidelines as well.


Moderators Developed PTSD

Due to this type of content, moderators hired by Facebook through third-party firms developed post-traumatic stress disorder (PTSD) from having to watch videos related to suicide and rape, among other things, as reported by The Verge.

Although the social media platform uses AI to remove harmful content, it also hires people to review the content reported to it.

The company has since agreed to pay $52 million to the content moderators, according to the BBC, as compensation for the mental health issues they developed because of the job.

ⓒ 2021 All rights reserved. Do not reproduce without permission.