Despite efforts, Facebook still accused of tolerating misinformation and hate speech
Earlier this week, social media network Facebook announced that it had gone ahead with plans to remove more than a billion fake accounts. The account deletion took place between October and December of the previous year.
While this update is welcome and should be continued, it also raises a key question about Facebook’s account creation filters and its ability to monitor account credibility on its platform. How was it possible for such an inordinate number of fake accounts to exist, and was there an element of tolerance on the part of the platform?
Facebook has indeed acknowledged that misinformation and the spread of fake news is an issue and that it has upwards of 35,000 people working on curtailing this socially corrosive phenomenon on the network.
Regarding posts on the coronavirus pandemic specifically, Facebook stated that it removed more than 12 million pieces of content, many of them after health experts raised the issue with the company by flagging content on Covid-19 and vaccines as misinformation.
The timing of Facebook’s update on the actions it has taken is far from random. The social media network will soon be grilled in a hearing before the US House Committee on Energy and Commerce, as the new administration probes the effect that social media companies have on public discourse through the dissemination of misinformation.
Despite Facebook’s announcement of its actions against fake news, the social network was still hit with a lawsuit from international NGO Reporters Without Borders (RSF), which claims that Facebook willingly tolerates the spread of misinformation and hate speech on its platform.
“Using expert analyses, personal testimony and statements from former Facebook employees, RSF’s lawsuit demonstrates that the California-based company’s undertakings to its consumers are largely mendacious, and that it allows disinformation and hate speech to flourish on its network (hatred in general and hatred against journalists), contrary to the claims made in its terms of service and through its ads,” the RSF statement said.
The lawsuit was filed in France because the legal framework there is well suited to the type of litigation RSF has brought. Moreover, RSF cites the country’s large number of Facebook users, which gives the court battle direct local relevance, since those users have a stake in the outcome of the lawsuit.
“First Draft, a non-profit organisation founded in 2015 to combat online disinformation, recently identified Facebook as ‘the hub of vaccine conspiracy theories’ in French-speaking communities,” the RSF statement explained.
The First Draft study that RSF cited explains that it combed through 14,394,320 posts across Facebook Pages, Facebook Groups, Instagram and Twitter, the last being the only one of the four platforms not owned by Facebook. It then delved deeper, singling out the posts that received any serious level of engagement for further analysis.
This allowed First Draft to identify two dominant narratives in online discussion of Covid-19 and vaccine development. The first centres on the “political and economic motives” behind vaccines, reflecting a high level of cynicism and distrust; the second on their “safety, efficacy and necessity”, a somewhat more multifaceted topic.
Beyond this thematic analysis, the study also found that discussion is shaped by linguistic traits, that visual content tends to drive or at the very least overpower the debate, that unverified pages on Facebook and Instagram are the driving force behind the discussion, and that conspiracy theories and similar content are extremely potent, with an immense and sadly wide-ranging effect on social media.
“Conspiracy theories about vaccines in general and the Covid-19 vaccine specifically play an outsized role on social media,” the study explained.
“And these conspiracy theories were not limited to fringe groups. They resonated with the ‘Yellow Vest’ movement, libertarians, New Age groups, highly popular anti-government groups and more conventional audiences, with key terms such as ‘microchipping’ and ‘deep state’ becoming increasingly popular,” First Draft added, showing how conspiracy theories can encompass a wide range of groups and ideologies.
Facebook may indeed continue to ramp up its measures against misinformation on its platform, but not because it has discovered a new sense of moral purpose. Real, impactful action will be necessary to fend off pressure from the large advertisers who pump money into the platform for marketing reasons.
Over the past year or so, upwards of 100 companies have publicly announced temporary suspensions of their advertising campaigns on Facebook. Among them are global behemoths such as Coca-Cola, Unilever and Starbucks, acting as part of a campaign called Stop Hate For Profit.
Despite the apparent link between financial repercussions and Facebook taking action, the company denies that any recent measures were driven by financial concerns.
“If people were sharing information that could cause real-world harm, we will take that down. We’ve done that in hundreds of thousands of cases,” said Steve Hatch, Facebook’s vice president for Northern Europe.
Last week, a study published in Nature claimed that “subtly shifting attention to accuracy increases the quality of news that people subsequently share”.
The study explained that although most people do value accuracy and would like to share highly accurate content, they sometimes unwittingly share misinformation because their attention is drawn to factors other than accuracy. This finding suggests a clear action plan for both content creators and social media companies.
In short, when the study prompted its subjects to consciously weigh accuracy among the factors in judging which headline they would be most likely to share, the truthfulness of the articles and other content they ended up sharing actually increased.
“We provide evidence that shifting attention to accuracy is the mechanism behind this effect by showing that the treatment condition leads to the largest reduction in the sharing of headlines that participants are likely to deem to be the most inaccurate,” the study explains.
“The most obviously inaccurate headlines are the ones that the accuracy salience treatment most effectively discourages people from sharing.”