Facebook, YouTube and Twitter do not have enough staff monitoring the huge number of accounts featuring extremist content, and UK lawmakers are taking the companies to task over it.
The lawmakers have branded the situation as "alarming" and filed a report chastising the corporations, which they claim are "hiding behind their supranational legal status to pass the parcel of responsibility." They argue that the tech companies refuse to take action even when specific cases are damaging to their brands.
The report, which was filed by the Home Affairs Committee appointed by the House of Commons, points out that the reputation of the companies as responsible actors will crumble if they do not address the issues that make the platforms a cyber-version of the Wild West.
The report highlights that among the social media platforms, Twitter stands out as it has repeatedly failed to weed out extremist content and report it to law enforcement agencies. The lawmakers also point out that the UK is one of the countries most threatened by radicalization and terror spreading via the internet.
The committee acknowledges that between 2015 and 2016 Twitter suspended 125,000 accounts worldwide that were connected to terrorism, but it estimates that this is only a "drop in the ocean." In February, the company said that unfortunately no tech firm can devise a "magic algorithm" capable of identifying terrorist actions and content on the internet.
Twitter explained that it has more than 100 employees who manually review posts related to extremism to keep its platform palatable. They decide whether to remove the content or to suspend the account in question. Google and Facebook declined to say how many such reviewers they employ.
Several reports have surfaced recently about terror groups, such as the Islamic State group (ISIS), using social media for communication, recruitment and propaganda.
"Its forums, message boards and social media platforms are the lifeblood of Daesh and other terrorist groups," notes Keith Vaz, a Parliament member and chairman of the committee. He goes on to say that the modern front line is the internet.
The lawmakers recommend that social networking companies cooperate more closely with the Counter Terrorism Internet Referral Unit (CTIRU), a specially designated unit within the Metropolitan Police.
The committee also suggests that the CTIRU get revamped into a state-of-the-art, central operational hub. In the lawmakers' vision, the unit should work 24/7 to locate threats in order to share pertinent information with different security agencies and officers of the law.
Facebook UK's head of policy, Simon Milner, underlines that his company has a zero-tolerance policy on terrorism. He explains that Facebook deals "swiftly and robustly" with reports of terrorist content. As soon as its employees find evidence of extremist material, he adds, they delete both the content and the associated account.
YouTube representatives have affirmed that content inciting violence is removed, and that the platform also makes a point of terminating accounts run by terrorist groups.
In May this year, Facebook, Twitter, Microsoft and Google signed a deal to keep closer tabs on online hate speech posted by European Union users of their platforms. Reported content is reviewed within 24 hours and may be taken down if it falls into one of the prohibited categories.
Some of these companies are facing lawsuits in the U.S. over terrorist content hosted on their platforms.
Probably the most famous such legal action was filed by the father of a victim of the Paris terror attack in November. The plaintiff argues that Facebook, Google and Twitter knowingly allowed ISIS to use their platforms as a way to disseminate extremist propaganda, recruit new fighters and raise money for its operations.
The UK MPs put forward a number of other proposals, such as turning communities into better communication hubs. The Muslim Council of Britain was urged to do all it can to expose and remove those who spread extremist propaganda within the community of believers, fueling intolerance, hate speech or terrorist acts.