For months, a programming error exposed the personal identities of more than 1,000 Facebook content moderators working across 22 departments at the Menlo Park company, where they reviewed and removed inappropriate content — including hate speech and terrorist propaganda — from the social media site, according to The Guardian. The bug caused moderators' personal profiles to appear as notifications in the activity logs of the groups and individuals they had removed from the site. It was apparently discovered late last year.
Of the roughly 1,000 employees the bug affected, about 40 worked in Facebook’s counter-terrorism department in Dublin, Ireland — six of whom were flagged as “high priority” victims after Facebook concluded that terrorist groups had likely viewed their profiles. The moderators reportedly realized something was wrong when they began receiving friend requests from people affiliated with the terrorist groups they were reviewing.
One of those six, an Iraqi-born Irish citizen who wished to remain anonymous, quit his job and fled Dublin in the wake of the incident, only returning from Eastern Europe when he ran out of money. He told The Guardian that he is currently unemployed, suffers from anxiety and is on antidepressants. The unnamed man is also seeking compensation from both Facebook and the contractor he worked for. Facebook, for its part, offered to install home security systems, provide security escorts and pay for counseling for the six exposed workers — though there is currently no word on whether the company is offering any financial restitution. To prevent further harm, Facebook is reportedly experimenting with anonymous profiles for moderators, rather than requiring them to use their personal accounts.
The incident comes amid mounting criticism of the platform’s inability to root out terrorism, particularly from European leaders. Both the UK and France have been considering levying fines on tech companies that “fail to take action” against terrorist groups, The Verge reported. Facebook, in turn, announced this week that it wants to be a “hostile place for terrorists,” outlining some of its counter-terrorism operations in a blog post published Thursday.
“We are currently focusing our most cutting edge techniques to combat terrorist content about ISIS, Al Qaeda and their affiliations, and we expect to expand to other terrorist organizations in due course,” the company wrote.