A very embarrassing affair for the social media giant. One of Facebook’s recommendation algorithms asked users whether they wanted to see more “primate videos” under a video, posted by a UK newspaper, showing black people, the New York Times revealed on Friday 3 September.
The case began more than a year ago when the Daily Mail, a British tabloid, posted a video on Facebook entitled “White man calls cops against black men at the marina”.
The video showed only people, not monkeys. Yet beneath it, the question “See more videos about primates?”, accompanied by the options “Yes / Dismiss”, appeared on some users’ screens, as shown in a screenshot originally posted on Twitter by Darci Groves, a former designer who herself worked for Facebook.
“It’s scandalous,” she commented, calling on her former colleagues at Facebook to escalate the matter.
The facts date from June 2020 and are only emerging today: following a video showing black men, a @Facebook algorithm offered to keep showing “primate” videos. That a GAFA company helps spread the most abject #racism is unacceptable. https://t.co/f5Si5f08bk pic.twitter.com/KCKSsqkt3H
– Collective VAN (@Collectif_VAN) September 4, 2021
“This is clearly an unacceptable error,” reacted a spokesperson for Facebook, requested by Agence France-Presse.
Facebook has apologized
“We apologize to anyone who saw these insulting recommendations,” she added. The Californian company deactivated the recommendation tool on this topic “as soon as we noticed what was happening, in order to investigate the causes of the problem and prevent it from happening again,” she specified. “As we have said, although we have improved our artificial intelligence systems, we know they are not perfect and that we still have progress to make,” she continued.
The case indeed underlines the limits of artificial intelligence (AI) technologies, regularly touted by the platform in its efforts to build a personalized feed for each of its nearly 3 billion monthly users.
Mark Zuckerberg’s firm also makes extensive use of AI in content moderation, to identify and block problematic messages and images before they are even seen. But Facebook, like its competitors, is regularly accused of not doing enough to fight racism and other forms of hatred and discrimination.
In November 2019, a dozen current and former Facebook employees had also denounced, in an article, a culture “hostile to all those who are not white” within the American company.
The subject is all the more sensitive because many civil society organizations accuse social networks and their algorithms of contributing to the division of American society, against the backdrop of the “Black Lives Matter” protests.