Misinformation and fake news are rampant on social media. That is nothing new. From the flat earth society to anti-vaccine groups, conspiracy theories thrive on a variety of social networking platforms.
But in the age of coronavirus, such misinformation is acutely dangerous.
Despite token efforts to rein in the worst of its users and groups, it has recently been revealed that platforms like Facebook don't just turn a blind eye most of the time. They make a killing from ads targeted specifically at these groups.
Here is the stunning extent to which social media platforms profit from advertising targeting conspiracy theorists and pseudoscience believers. And how they are endangering us all in the process.
Ads, Social Media's Lifeblood
Facebook is the world's largest social network, with a mind-boggling 2.6 billion active users as of the first quarter of 2020. Coming in a distant but still impressive second is Facebook-owned Instagram, with over 1 billion users.
These platforms are essential channels for businesses to promote their products and services. Social media are an integral part of the marketing strategies for businesses ranging from local florists and village bakeries to travel bloggers and providers of business phone systems.
Leveraging the massive reach of platforms like Facebook and Instagram, businesses can boost their visibility, and acquire valuable social proof of their quality.
Paid ads are a prime way for businesses to do that, and draw an audience of potential customers.
And social media platforms cash in big.
Given the revenue at stake, it is not surprising that platforms are extremely reluctant to curb any advertising activities - despite pledging to combat misinformation.
Facebook offers advertisers a massive array of options to target specific demographics.
This goes far beyond the age, sex, or geographical location of a user. It includes their lifestyle, professional occupation, level of income, family situation, general interests, and past purchasing decisions.
Until the end of April 2020, one of the categories that advertisers could use to target their audience was 'interested in pseudoscience' - a category that included a breathtaking 78 million users.
It was removed only after a public outcry in the midst of the pandemic. Consumer Reports and The Markup both reported on the issue, after having tried to place ads on Facebook and Instagram that disparaged social distancing measures and claimed the virus was a hoax. In both cases, the ads were approved within minutes.
But the history of Facebook explicitly allowing advertisers to target fringe groups goes much further back than the pandemic.
In 2017, for example, ProPublica demonstrated in a trial that it could target ads at antisemitic groups. The categories used for this endeavor - people interested in 'jew hater', 'how to burn jews' or 'history of "why jews ruin the world"' - were subsequently removed.
Only last year, The Guardian revealed that it was also possible on Facebook and Instagram to specifically target ads at people interested in 'vaccine controversies'. That category included over 900,000 users.
The Fallout of Misinformation Ads
It is impossible to estimate the revenue that social media platforms have generated from targeting ads specifically at pseudoscience believers and conspiracy theorists due to a lack of disclosed data.
But the fallout of misinformation campaigns has been starkly visible - especially in the case of the coronavirus.
In Britain, for example, a conspiracy theory claiming that 5G was responsible for the coronavirus outbreak circulated in various groups and was pushed by ads. As a result, there have been arson attacks on phone masts, and Facebook has come under fire. In response, the company has vowed to combat this misinformation.
But what exactly does that 'combat' look like?
In the case of the coronavirus, a Facebook spokesperson told Consumer Reports that a few thousand staff were now set up to work from home to filter ads containing dangerous misinformation. That's a far cry from the 15,000 people Facebook claims are usually occupied with this task.
Instead, the company relies on artificial intelligence systems to sort through submitted ads and flag suspicious content for human review.
With its massive number of users and multiplying issues of misinformation, Facebook has increasingly relied on AI to take over the work of its moderators. On the front lines, though, it often comes down to battles between technologies. Only last year, the platform launched a campaign to remove propaganda accounts that used AI-generated profile images.
Social media have long since become an integral part of modern everyday life, and of businesses' marketing efforts. Advertising, in turn, is social media's main source of revenue.
It is highly unlikely that this will change anytime soon. Or that anyone wants it to.
Given their influence, social media companies have to be held accountable for their actions, even those of algorithms. It should not take unmasking articles and public outcries - over and over - for them to take even baby steps in the right direction.
For them to stop turning a profit off blatant misinformation.