Facebook to engage external auditors to validate its content review report

The report showed that the number of fake accounts actioned declined from 1.7 billion in the March quarter to 1.5 billion in the June quarter.

Social media giant Facebook has said it will engage external auditors to conduct an independent audit of its metrics and validate the numbers published in its Community Standards Enforcement Report.

The US-based company first began sharing metrics on how well it enforces its content policies in May 2018, to track its work across six types of content that violate its Community Standards, which define what is and isn’t allowed on Facebook and Instagram.

Currently, the company reports across 12 areas on Facebook and 10 on Instagram, including bullying and harassment, hate speech, dangerous organisations: terrorism and organised hate, and violent and graphic content.


Vishwanath Sarang, Technical Program Manager for Integrity at Facebook, said that over the past year the company has been working internally with auditors to assess how the metrics it reports can be audited most effectively.

“This week, we are issuing a Request For Proposal (RFP) to external auditors to conduct an independent audit of these metrics. We hope to conduct this audit starting in 2021 and have the auditors publish their assessments once completed,” he said in a blog post.

Emphasising that the credibility of its systems should be earned and not assumed, Sarang said the company believes that “independent audits and assessments are crucial to hold us accountable and help us do better”.

“…transparency is only helpful if the information we share is useful and accurate. In the context of the Community Standards Enforcement Report, that means the metrics we report are based on sound methodology and accurately reflect what’s happening on our platform,” Sarang said.

In the sixth edition of its Community Standards Enforcement Report, the company noted that there was an impact of COVID-19 on its content moderation.

“While our technology for identifying and removing violating content is improving, there will continue to be areas where we rely on people to both review content and train our technology,” Guy Rosen, VP Integrity at Facebook, said.

Rosen said the company wants people to be confident that the numbers it reports around harmful content are accurate.

“…so we will undergo an independent, third-party audit, starting in 2021, to validate the numbers we publish in our Community Standards Enforcement Report,” he said.

Rosen said the proactive detection rate for hate speech on Facebook increased from 89 per cent to 95 per cent, and in turn, the amount of content it took action on increased from 9.6 million in the first quarter of 2020, to 22.5 million in the second quarter.

“This is because we expanded some of our automation technology in Spanish, Arabic and Indonesian and made improvements to our English detection technology in Q1. In Q2, improvements to our automation capabilities helped us take action on more content in English, Spanish and Burmese,” he said.

On Instagram, the proactive detection rate for hate speech increased from 45 per cent to 84 per cent, and the amount of content on which action was taken increased from 808,900 in the March quarter to 3.3 million in the June quarter.

“Another area where we saw improvements due to our technology was terrorism content. On Facebook, the amount of content we took action on increased from 6.3 million in Q1, to 8.7 million in Q2.

“And thanks to both improvements in our technology and the return of some content reviewers, we saw increases in the amount of content we took action on connected to organised hate on Instagram and bullying and harassment on both Facebook and Instagram,” Rosen said.

He further said: “Since October 2019, we’ve conducted 14 strategic network disruptions to remove 23 different banned organisations, over half of which supported white supremacy”.

The report showed that the number of fake accounts actioned declined from 1.7 billion in the March quarter to 1.5 billion in the June quarter.

“We continue to improve our ability to detect and block attempts to create fake accounts. We estimate that our detection systems help us prevent millions of attempts to create fake accounts every day.

“When we block more attempts, there are fewer fake accounts for us to disable, which has led to a general decline in accounts actioned since Q1 2019,” it added.

The report estimated that fake accounts represented approximately 5 per cent of worldwide monthly active users (MAU) on Facebook during the June quarter.
