ChatGPT maker OpenAI explores whether users can ‘responsibly’ make AI porn : NPR
US Air Force Employee ‘Secretly Took Photos of Kids to Make AI Child Porn Images’
AI porn may be a technological marvel, but its implications for society are deeply concerning and multifaceted. Several federal bills have been introduced to explicitly criminalize the nonconsensual distribution of intimate imagery, or deepfake porn in particular, but Gibson says she doesn't have great hopes of those bills becoming the law of the land. Liu, who was already working in tech, founded Alecto AI, a startup named after a Greek goddess of vengeance.
But it remains to be seen whether the company could effectively moderate explicit generation to prevent bad actors from using the tools. Microsoft made changes to one of its generative AI tools after 404 Media reported that it had been used to create explicit images of Taylor Swift that were distributed on the social platform X. An October 2023 report from the UK-based watchdog Internet Watch Foundation details how artificial intelligence is being used to create child sexual abuse imagery (AI CSAM). Policies that regulate the production or dissemination of explicit fake images would also have to comply with a bevy of free speech laws, she said.
But even before AI, there were legal questions about cartoons and other non-realistic pornographic images. U.S. lawmakers from both sides of the political aisle are looking to hold perpetrators accountable through legislation that would criminalize the production of these fake images and allow victims to sue for damages. The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act was introduced by Senate Majority Whip Dick Durbin (D-IL), joined by Sens. Lindsey Graham (R-SC), Amy Klobuchar (D-MN), and Josh Hawley (R-MO). States should continue pushing legislation to support the victims of AI-generated deepfakes, but policies should also be updated to make it easier to identify and prosecute bad actors, De Mooy said. Franks wants to see legislation on deepfake-AI porn include a criminal component as well as a civil one, because, she says, people are generally more fearful of going to jail than of being sued over something so abstract and widely misunderstood. She thinks many people have learned not to believe everything they read on the internet, but they don't have the same mental guard against video and audio.
Voters need disclosures and disclaimers when AI is being used to help them determine the difference between fact and fiction, said the bill’s sponsor, Republican Rep. Adam Neylon. He said the measure was an “important first step that gives clarity to voters,” but more action will be needed as the technology evolves. The Assembly approved a bipartisan measure to require political candidates and groups to include disclaimers in ads that use AI technology. The most convincing images are visually indistinguishable from real CSAM, even for trained analysts, IWF officials said in the report. Deepfake images, intentionally or not, often shame or humiliate the victim, making it a public health issue, De Mooy said.
“We are continuing to add further logic and mechanisms to better address methods that have been used to sidestep [our] guardrails,” a Leonardo Ai spokesperson said. “Many of these changes are already live and more are coming over the next couple of weeks.” New South Wales Education Minister Prue Car called the incident “abhorrent” during a Thursday press conference.
The app she’s building lets users deploy facial recognition to check for wrongful use of their own image across the major social media platforms (she’s not considering partnerships with porn platforms). Liu aims to partner with the social media platforms so her app can also enable immediate removal of offending content. “If you can’t remove the content, you’re just showing people really distressing images and creating more stress,” she says. “Our generation is facing its own Oppenheimer moment,” says Lee, CEO of the Australia-based startup That’sMyFace, whose long-term goal is to create a tool that any woman can use to scan the entire Internet for deepfake images or videos bearing her own face. The change is part of a suite of moves aimed at reducing violence against women and addressing the role that technology, including social media, plays in spreading and normalising violent, degrading and misogynistic imagery and ideas.
It is very hard not to see this as Germany decriminalizing the possession and distribution of child pornography. California, meanwhile, is pushing a new bill, expected to become law, that would further protect children from AI-generated deepfake porn by closing that loophole and penalizing violators. The trend began with celebrities being deepfaked by unknown bad actors; in one of the most infamous cases, explicit fake images of Taylor Swift trended online. In Texas, the law prohibits anyone from possessing child porn, and a 2023 amendment extended the legislation to cover modified images.
- Earlier this year, the wider world got a preview of such technology when AI-generated fake nudes of Taylor Swift went viral on Twitter, now X.
- In 2023, the CyberTipline, run by the National Center for Missing and Exploited Children, received 4,700 reports of child sexual abuse material involving generative AI.
- OpenAI’s models have been trained on vast amounts of public web content, some undoubtedly pornographic in nature.
- “A lot of my work has to do with chain breaking, the cycle breaking, and this, to me, is a really, really, really important cycle to break,” she says.
She’d already been working on the deepfake-AI legislation when it happened, but she says the Swift incident helped accelerate the timeline on the bipartisan, bicameral legislation. Sens. Dick Durbin (D-Ill.) and Lindsey Graham (R-S.C.) are leading the Senate version of the bill, while Ocasio-Cortez leads the House version. It’s called the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act of 2024. The legislation amends the Violence Against Women Act so that people can sue those who produce, distribute, or receive deepfake pornography, if they “knew or recklessly disregarded” that the victim did not consent to those images. The child safety ecosystem is already overtaxed, with millions of files of suspected CSAM reported to hotlines annually. Anything that adds to that torrent of content, especially photorealistic abuse material, makes it more difficult to find children who are actively in harm’s way.
Companies that provide generative AI resources should be required to include a tool or mechanism that can identify or label content as being manufactured, she said. Otherwise, it’s hard for victims to prove that a picture of them has actually been manipulated. Plus for law enforcement officials, “it’s hard, if not impossible, to track or identify the creators of this kind of content [because of] the anonymous nature of the way that they’re created,” De Mooy said. She and her boyfriend called the local police, who didn’t provide much assistance. Taylor played phone tag with a detective who told her, “I really have to examine these profiles,” which creeped her out.
Will performers be able to coexist with AI, or will it put them out of business? Joining us to discuss are Tatum Hunter, consumer technology reporter for the Washington Post, Steve Lightspeed, CEO of porn.ai, and Lexi Luna, adult entertainment performer. Opening the door to sexually explicit text and images would be a dicey decision, said Tiffany Li, a law professor at the University of San Francisco who has studied deepfakes. OpenAI, the artificial intelligence powerhouse behind ChatGPT and other leading AI tools, revealed on Wednesday, in a document outlining the future use of its technology, that it is exploring how to “responsibly” allow users to create AI-generated porn and other sexually explicit content. Alarming cases of child deepfake porn have spread through communities and schools in many regions, including Canada, the United States, and the United Kingdom.
Eventually, she says, he told her that this was technically legal, and that whoever did it had a right to do it. And as Clarke points out, the harm is no longer focused only on marginalized communities. It has become widespread, affecting teens and college students across the country, and it will further blur the lines between what’s real and what’s not as politics become more and more polarized.
Meanwhile, in November, a scandal in which a teenage boy created 50 AI nude photos of his female classmates forced the closure of a private school. Federal authorities became aware of Anderegg’s actions when they received a CyberTip from the National Center for Missing and Exploited Children (NCMEC), prosecutors said. Instagram reported Anderegg’s account to NCMEC for sharing the images, according to the DOJ’s release.
- Part of the lengthy Model Spec document discloses that the company is exploring a leap into porn and other explicit content.
- IWF’s October report found AI CSAM has increased the potential for the re-victimization of known child sexual abuse victims, as well as for the victimization of famous children and children known to perpetrators.
- The series will address algorithmic bias in AI systems and will educate the public on misinformation, particularly in regard to disinformation campaigns against Black voters.
- Hayes said the bill adding further penalties for AI-generated content depicting minors in explicit images or videos was important, as the FBI reports this sort of content often involves depictions of real minors.
- Currently, it is not illegal to create a deepfake AI-generated or digitally altered pornographic image.
Critics say the fact that OpenAI is even considering going down this path makes a mockery of its mission statement to produce safe and beneficial AI. In the UK, lawmakers are introducing an amendment to the country’s Criminal Justice Bill that will force adults who make deepfake porn to “face the consequences of their actions,” according to a release from the Ministry of Justice. If the amendment is passed, creators of deepfake porn could face an “unlimited fine” and possible jail time if the image is widely circulated. “As somebody who grew up during the age of social media, I know firsthand some of the harms that can have on the mental health of children in our state,” Scheetz said.
Anderegg used Instagram direct messages to send the teenager several GenAI images of minors displaying their genitals, the DOJ said. Several of the images showed nude or partially clothed minors touching their genitals or being sexually abused by men, according to the DOJ. Evidence seized from Anderegg’s electronic devices revealed that he generated the images using “specific [and] sexually explicit text prompts related to minors,” which he kept stored on his computer, prosecutors said. “Right now there are 49 states, plus D.C., that have legislation against nonconsensual distribution of intimate imagery,” Gibson says, “and some are significantly better than others.” Gibson notes that almost all of the laws require proof that the perpetrator acted with intent to harass or intimidate the victim, which can be very hard to prove. The only other laws concerning AI in Texas are SB 751, which makes political deepfakes illegal, and SB 1361, which outlaws pornographic deepfakes targeting adults.
The police are working with both the eSafety Commissioner and the Department of Education to address the incident. Legislatures in Texas, North Dakota, West Virginia and Puerto Rico have created advisory bodies to study and monitor AI systems their state agencies are using. Louisiana formed a new security committee to study AI’s impact on state operations, procurement and policy.
One of the most alarming aspects of AI porn is its capacity to create explicit content without the consent of the individuals depicted. This is particularly harmful when celebrities, public figures, or private individuals find their likeness used in deepfake pornographic content. The non-consensual nature of these creations constitutes a severe violation of privacy and personal autonomy. But what Hayman didn’t expect was that someone would pull a photo from her Instagram account and use artificial intelligence to morph her face into child sexual abuse images.
OpenAI Is ‘Exploring’ How to Responsibly Generate AI Porn – WIRED (May 8, 2024)
Meta AI held firm at first, providing a generic refusal and directing users to call a helpline if they were in danger. Kershaw said he believed “a tsunami of AI-generated abuse material” was coming. Outlawing that on its own is not within the commonwealth’s jurisdiction and would require changes to state and territory law, with moves under way in some jurisdictions.
The South Carolina Daily Gazette is a nonprofit news site providing nonpartisan reporting and thoughtful commentary.