Meta’s Oversight Board to Investigate Subjective Policy on AI Removal of Sexual Imagery

Meta continues to slowly adapt Facebook and Instagram's policies to account for the growing harms caused by AI, and this week it is confronting how it handles certain deepfakes spreading on its platforms.

On Tuesday, Meta's Oversight Board announced that it would review two cases involving sexualized AI-generated images of female celebrities that Meta initially handled unevenly, in order to "assess whether Meta's policies and enforcement practices are effective at addressing explicit AI-generated imagery."

The board is declining to name the prominent women whose deepfakes are under review in hopes of mitigating "the risks of furthering harassment," the board said.

In one case, an Instagram user reported an AI-generated nude image created to "resemble a public figure from India" that was posted to an account that "only shares AI-generated images of Indian women." Meta automatically closed the user's report "because it was not reviewed within 48 hours." The user's attempt to appeal that decision was also automatically closed.

Meta ultimately left the deepfake up until the user appealed to the Oversight Board. A Meta spokesperson declined Ars' request to comment on Meta's delay in removing the image before the board intervened. "As a result of the Board's selection of this case, Meta determined that its decision to leave the content up was in error and removed the post for violating the Bullying and Harassment Community Standard," the board's blog said.

One Facebook user had much better luck reporting a deepfake created to "resemble an American public figure" that depicted her nude "with a man touching her chest." The AI-generated image was posted to "a Facebook group for AI creations" with a caption naming the prominent woman, the board said.

In that case, another "user had already posted this image," prompting Meta to escalate review to its safety team, which deemed the content a violation of the Bullying and Harassment policy prohibiting "derogatory sexualized images or drawings."

Because Meta removed the image, it was also added to Meta's "automated enforcement system that automatically finds and removes images that have already been identified by human reviewers as violating Meta's rules."

For this case, the board agreed to review the appeal of a Facebook user who had attempted to share the AI image and whose protest of the decision to remove it was automatically closed by Meta.

Meta's Oversight Board is reviewing both cases, and over the next two weeks it will solicit public comments to help bring Meta's platforms up to speed on AI harm mitigation. Facebook and Instagram users, as well as organizations that "can provide valuable perspectives," will have until April 30 to submit their comments.

Comments can be shared anonymously, and commenters are asked to "avoid naming or otherwise sharing private information about third parties or speculating about the identities of the people depicted in the content of these cases."

In particular, the board asks for comments on "the nature and severity of the harms posed by deepfake pornography," especially for "women who are public figures." It is also seeking experts to assess the prevalence of the deepfake problem in the US and India, where the celebrities involved in these cases reside.

In December, India's IT Minister Rajeev Chandrasekhar called celebrity deepfakes on social media "dangerous" and "damaging" misinformation that "needs to be dealt with by platforms," the BBC reported. Other countries, including the United States, have proposed legislation criminalizing the creation and sharing of certain AI deepfakes online.

The Oversight Board also asked public commenters to weigh in on the policies and enforcement processes "that may be most effective" in combating deepfakes, and on how well Meta is doing at enforcing its "derogatory sexualized photoshop or drawings" rule. That includes gathering feedback on the "challenges" users face when reporting deepfakes while Meta relies on "automated systems that close reports within 48 hours if no review has taken place."

Once the board submits its recommendations, Meta will have 60 days to respond. In a blog post, Meta has already indicated that it will "implement the board's decision."
