SAN FRANCISCO: Facebook Inc said on Wednesday that company moderators during the last quarter removed 8.7 million user images of child nudity with the help of previously undisclosed software that automatically flags such photos.
The machine learning tool, rolled out over the last year, identifies images that contain both nudity and a child, allowing increased enforcement of Facebook’s ban on photos that show minors in a sexualised context.
A similar system also disclosed on Wednesday catches users engaged in “grooming,” or befriending minors for sexual exploitation.
Facebook’s global head of safety, Antigone Davis, said in an interview that the “machine helps us prioritise” and “more efficiently queue” problematic content for the company’s trained team of reviewers.
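Facebook has not published how the system is built; the sketch below is purely illustrative of the general idea described above, in which an image is flagged when separate “nudity” and “minor” classifier scores both cross a threshold, and flagged items are queued so human reviewers see the highest-scoring ones first. All names, thresholds, and scores are assumptions for illustration, not details of Facebook’s actual system.

```python
# Hypothetical sketch only -- not Facebook's implementation.
# Flag an image when assumed "nudity" and "minor" classifier scores both
# exceed a threshold, then queue flagged images so the highest-risk ones
# reach human reviewers first.

import heapq
from dataclasses import dataclass, field

# Assumed thresholds; a real system would tune these against labelled data.
NUDITY_THRESHOLD = 0.8
MINOR_THRESHOLD = 0.8


@dataclass(order=True)
class ReviewItem:
    priority: float                      # lower value pops first from the heap
    image_id: str = field(compare=False)


def flag_and_queue(predictions, queue):
    """predictions: iterable of (image_id, nudity_score, minor_score) tuples."""
    for image_id, nudity_score, minor_score in predictions:
        if nudity_score >= NUDITY_THRESHOLD and minor_score >= MINOR_THRESHOLD:
            # Negate the combined score so heapq's min-heap returns the
            # highest-scoring (most urgent) image first.
            heapq.heappush(queue, ReviewItem(-(nudity_score * minor_score), image_id))


if __name__ == "__main__":
    queue = []
    flag_and_queue(
        [
            ("img_001", 0.95, 0.90),  # flagged, reviewed first
            ("img_002", 0.40, 0.10),  # not flagged by this filter
            ("img_003", 0.85, 0.82),  # flagged, reviewed second
        ],
        queue,
    )
    while queue:
        item = heapq.heappop(queue)
        print(f"send {item.image_id} to human review (score {-item.priority:.2f})")
```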
The company is exploring applying the same technology to its Instagram app.
Under pressure from regulators and lawmakers, Facebook has vowed to speed up the removal of extremist and illicit material. Machine learning programs that sift through the billions of pieces of content users post each day are essential to its plan.
Machine learning is imperfect, and news agencies and advertisers are among those that have complained this year about Facebook’s automated systems wrongly blocking their posts.
Davis said the child safety systems would make mistakes but users could appeal.
“We’d rather err on the side of caution with children,” she said.