Efforts to keep Facebook products from being used for hate, misinformation have trailed its growth
Facebook employees have warned for years that as the company raced to become a global service it was failing to police abusive content in countries where such speech was likely to cause the most harm, according to interviews with five former employees and internal company documents viewed by Reuters.
For over a decade, Facebook has pushed to become the world's dominant online platform. It currently operates in more than 190 countries and boasts more than 2.8 billion monthly users who post content in more than 160 languages.
But its efforts to prevent its products from becoming conduits for hate speech, inflammatory rhetoric and misinformation — some of which has been blamed for inciting violence — have not kept pace with its global expansion.
Internal company documents viewed by Reuters show Facebook has known that it hasn't hired enough workers who possess both the language skills and knowledge of local events needed to identify objectionable posts from users in a number of developing countries.
The documents also showed that the artificial intelligence systems Facebook employs to root out such content frequently aren't up to the task, and that the company hasn't made it easy for its global users themselves to flag posts that violate the site's rules.
Those shortcomings, employees warned in the documents, could limit the company's ability to make good on its promise to block hate speech and other rule-breaking posts in places from Afghanistan to Yemen.
In a review posted to Facebook's internal message board last year regarding ways the company identifies abuses on its site, one employee reported "significant gaps in certain countries at risk of real-world violence, especially Myanmar and Ethiopia."
The documents are among a cache of disclosures made to the U.S. Securities and Exchange Commission and Congress by Facebook whistleblower Frances Haugen, a former Facebook product manager who left the company in May.
Reuters was among a group of news organizations able to view the documents, which include presentations, reports and posts shared on the company's internal message board. Their existence was first reported by The Wall Street Journal.
"We know these challenges are real and we are proud of the work we've done to date," Facebook spokesperson Jones said.
Still, the cache of internal Facebook documents offers detailed snapshots of how employees in recent years have sounded alarms about problems with the company's tools — both human and technological — aimed at rooting out or blocking speech that violated its own standards.
The material expands upon Reuters's previous reporting on Myanmar and other countries, where the world's largest social network has failed repeatedly to protect users from problems on its own platform and has struggled to monitor content across languages.
Among the weaknesses cited was a lack of screening algorithms for languages used in some of the countries Facebook has deemed most "at-risk" for potential real-world harm and violence stemming from abuses on its site.
The company designates countries "at-risk" based on variables including unrest, ethnic violence, the number of users and existing laws, two former staffers told Reuters. The system aims to steer resources to places where abuses on its site could have the most severe impact, the people said.
Facebook reviews and prioritizes these countries every six months in line with United Nations guidelines aimed at helping companies prevent and remedy human rights abuses in their business operations, spokesperson Jones said.
In 2018, United Nations experts investigating a brutal campaign of killings and expulsions against Myanmar's Rohingya Muslim minority said Facebook was widely used to spread hate speech toward them. That prompted the company to increase its staffing in vulnerable countries, a former employee told Reuters. Facebook has said it should have done more to prevent the platform from being used to incite offline violence in the country.
Ashraf Zeitoon, Facebook's former head of policy for the Middle East and North Africa, who left in 2017, said the company's approach to global growth has been "colonial," focused on monetization without safety measures.
More than 90 per cent of Facebook's monthly active users are outside the United States or Canada.
Facebook has long touted the importance of its artificial intelligence (AI) systems, in combination with human review, as a way of tackling objectionable and dangerous content on its platforms. Machine-learning systems can detect such content with varying levels of accuracy.
But languages spoken outside the United States, Canada and Europe have been a stumbling block for Facebook's automated content moderation, the documents provided to the government by Haugen show. The company lacks AI systems to detect abusive posts in a number of languages used on its platform. In 2020, for example, the company did not have screening algorithms known as "classifiers" to find misinformation in Burmese, the language of Myanmar, or hate speech in the Ethiopian languages of Oromo or Amharic, a document showed.
These gaps can allow abusive posts to proliferate in the countries where Facebook itself has determined the risk of real-world harm is high.
Reuters this month found posts in Amharic, one of Ethiopia's most common languages, referring to different ethnic groups as the enemy and issuing death threats. A nearly year-long conflict in the country between the Ethiopian government and rebel forces in the Tigray region has killed thousands of people and displaced more than two million.
Facebook spokesperson Jones said the company now has "proactive detection technology" to detect hate speech in Oromo and Amharic and has hired more people with "language, country and topic expertise," including people who have worked in Myanmar and Ethiopia.