Facebook accidentally used a rape threat to advertise Instagram, just a week after it allowed advertisers to target topics such as "Jew haters" and other anti-Semitic categories.
The offensive Instagram advertisement is the latest example of an algorithm gone wrong, and to make matters worse, the Facebook-owned company did not respond well to the issue.
Rape Threat Used As Instagram Ad
Olivia Solon, a reporter for The Guardian, uploaded a screenshot to Instagram about a year ago of a rape threat she had received, in order to show the kind of hate messages she gets as a journalist.
"I will rape you before I kill you, you filthy whore!" the Instagram image read. About a year later, Solon discovered that Facebook had used the photo of the rape threat to advertise Instagram on the social network.
Instagram is using one of my most "engaging" posts to advertise its service to others on Facebook pic.twitter.com/lyEBHQXMfa
— Olivia Solon (@oliviasolon) September 21, 2017
Instagram's advertising system used the screenshot of the rape threat to promote the picture-sharing platform to Solon's sister. The original Instagram post received only several comments and a handful of likes, but that was apparently enough to flag it as an "engaging" post, which is the metric Facebook uses to choose the images that appear in ads for Instagram.
Making matters worse is how Instagram responded to the horrible mistake.
"We are sorry this happened — it's not the experience we want someone to have," a spokesperson said. "This notification post was surfaced as part of an effort to encourage engagement on Instagram. Posts are generally received by a small percentage of a person's Facebook friends."
Describing a rape threat as simply an experience Instagram does not want its users to have is a gross understatement, and the episode highlights the danger of relying too heavily on algorithms.
The Danger Of Algorithms
The use of Solon's image of a rape threat as an Instagram advertisement shows what can happen when a company such as Facebook leans too heavily on algorithms to carry out tasks. Human oversight of the images that the algorithm selects for such advertisements would have prevented the mistake.
Facebook recently came under fire for the "Jew haters" targets for advertisers and for selling ads worth $100,000 to a Russian "troll farm" during the 2016 U.S. presidential election, and this latest gaffe digs a deeper hole for the social network. As Facebook implements new tools to prevent users from misusing its platform, it appears that the social network itself is the biggest offender.