A mother and her 14-year-old daughter are advocating for better protections for victims after AI-generated nude images of the teen and other female classmates were circulated at a high school in New Jersey.
Meanwhile, on the other side of the country, officials are investigating an incident involving a teenager who allegedly used artificial intelligence to create and distribute similar images of other students (also teenage girls) who attend a high school in the Seattle suburbs.
Once again, these disturbing cases have put a spotlight on AI-generated content that deeply harms women and girls and is proliferating online at an unprecedented rate. According to research by independent researcher Genevieve Oh shared with the Associated Press, more than 143,000 new deepfake videos have been posted online this year, more than all previous years combined.
Desperate for solutions, families are pushing lawmakers to implement robust safeguards for victims whose images are manipulated using new AI models, or the plethora of apps and websites that openly advertise their services. Advocates and some legal experts are also calling for federal regulation that can provide uniform protections across the country and send a strong message to current and would-be perpetrators.
“We’re fighting for our children,” said Dorota Mani, whose daughter was one of the victims in Westfield, a New Jersey suburb outside New York City. “They’re not Republicans, and they’re not Democrats. They don’t care. They just want to be loved, and they want to be safe.”
The problem of deepfakes isn’t new, but experts say it is getting worse as the technology to produce them becomes more available and easier to use. This year, researchers sounded the alarm about the explosion of AI-generated child sexual abuse material that uses depictions of real victims or virtual characters. In June, the FBI warned that it was continuing to receive reports from victims, both minors and adults, whose photos or videos were used to create explicit content that was shared online.
Several states have passed their own laws over the years to try to combat the problem, but they vary in scope. Texas, Minnesota and New York passed legislation this year criminalizing non-consensual deepfake pornography, joining Virginia, Georgia and Hawaii, which already had laws on the books. Some states, such as California and Illinois, have only given victims the ability to sue perpetrators in civil court, which New York and Minnesota also allow.
A handful of other states are considering their own legislation, including New Jersey, where a bill is currently in the works to ban deepfake pornography and impose penalties (jail time, a fine or both) on those who spread it.
State Sen. Kristin Corrado, a Republican who introduced the bill earlier this year, said she got the idea after reading an article about people trying to evade revenge porn laws by using their former partner’s image to generate deepfake pornography.
“We had a feeling something was going to happen,” Corrado said.
The bill has languished for a few months, but there’s a good chance it might pass, she said, especially with the spotlight that’s been put on the issue because of Westfield.
The Westfield incident took place this summer and was reported to the high school on Oct. 20, Westfield High School spokeswoman Mary Ann McGann said in a statement. McGann did not provide details about how the AI-generated photos were spread. However, Mani, the mother of one of the girls, said she received a call from the school informing her that nude photos had been created using the faces of some female students and then circulated among a group of friends on the social media app Snapchat.
The school has not confirmed what disciplinary action it has taken, citing confidentiality on matters involving students. Westfield police and the Union County Prosecutor’s Office, both of which were notified, did not respond to requests for comment.
No details have been released about the incident in Washington state, which occurred in October and is under police investigation.
Paula Schwan, chief of the Issaquah Police Department, said investigators have obtained multiple search warrants and noted that the information they have could be “subject to change” as the investigation continues. When contacted for comment, the Issaquah School District said it could not discuss the details of the investigation, but said any form of bullying, harassment or mistreatment among students is “completely unacceptable.”
If officials prosecute the New Jersey incident, the state’s existing law prohibiting the sexual exploitation of minors might already apply, said Mary Anne Franks, a law professor at George Washington University and director of the Cyber Civil Rights Initiative, an organization that fights online abuse. But those protections don’t extend to adults who might find themselves in a similar situation, she said.
The best fix, Franks said, would come from a federal law that can provide consistent protections nationwide and penalize dubious organizations profiting from products and apps that easily allow anyone to make deepfakes. She said that might also send a strong signal to minors who might create images of other kids impulsively.
President Biden signed an executive order in October that, among other things, called for barring the use of generative AI to produce child sexual abuse material or non-consensual “intimate imagery of real individuals.” The order also directs the federal government to issue guidance to label and watermark AI-generated content to help differentiate between authentic and material made by software.
Citing the Westfield incident, U.S. Rep. Tom Kean Jr., a Republican who represents the town, on Monday introduced a bill that would require developers to disclose AI-generated content. Another federal bill, introduced by U.S. Rep. Joe Morelle, a New York Democrat, would make sharing deepfake pornographic images online illegal, but it has not progressed for months because of gridlock in Congress.
Some call for caution, including the American Civil Liberties Union, the Electronic Frontier Foundation and The Media Coalition, an organization that works on behalf of trade groups representing publishers, movie studios and others, saying careful consideration is needed to avoid proposals that may conflict with the First Amendment.
“Some concerns about abusive deepfakes can be addressed under existing cyberbullying laws,” said Joe Johnson, an attorney with the ACLU of New Jersey. “Whether it’s federal or state, there must be substantial conversation and input from stakeholders to make sure that any bill isn’t overly broad and addresses the stated problem.”
Mani said her daughter has created a website and set up a charity aiming to help AI victims. The two have also been in talks with state lawmakers pushing the New Jersey bill and are planning a trip to Washington to advocate for more protections.
“Not every child, boy or girl, will have the support system to deal with this issue,” Mani said. “And they may not see the light at the end of the tunnel.”