The AI child exploitation crisis is here

The rapid advancement of artificial intelligence has made it easier than ever for bad actors to create child sexual abuse material, leaving prosecutors and lawmakers struggling to keep up.

Despite efforts by tech companies, law enforcement and activists, offenders consistently exploit system loopholes, open-source AI models and ready-made sexual exploitation platforms to generate imagery of both identifiable and nonexistent children, according to experts and law enforcement officials who spoke with NBC News.

Between January and September of 2025, NCMEC’s CyberTipline — the official online sexual exploitation tip line in the U.S. — received over a million reports related to generative AI, according to Fallon McNulty, the executive director of the center’s exploited children division.

“We often see bad actors at the forefront of leaning into those types of advancements in order to sexually exploit children online,” McNulty said. “The almost indistinguishable nature of the content that is being generated makes it extremely difficult for victim identification efforts.”

Law enforcement officials have found that child sexual abuse material (CSAM) created with generative AI can take on many forms. Sometimes people photograph children in public settings or use already-public photographs, and then use AI systems to turn them into CSAM. Other times, people create entirely new sexually explicit material that involves no real child or recognizable face and is completely AI-generated.

The material is becoming more realistic and harder to differentiate from real images, posing new issues for prosecutors and law enforcement.

Michael Prado, the deputy assistant director of Homeland Security Investigations’ Cyber Crimes Center (C3), said that in the first six months of 2025 alone, reports of child exploitation and generative AI increased by over 600% compared to 2023 and 2024 combined.

“What has, quite frankly, taken us by surprise is how rapidly it has spread,” Prado said.

Now, it’s not uncommon to find AI-generated CSAM mixed in with troves of “traditional” CSAM featuring real children, according to Prado.

“Collectors of this type of material, sometimes they don’t really differentiate. They’re just looking to increase their collections,” he said. “They’re looking to satisfy their perverse sexual interest in children and will use any means to accomplish that.”

Though widely available generative AI is a relatively recent phenomenon, the issue is already appearing in dozens of CSAM prosecutions across the U.S. But the number of cases is a tiny fraction of the number of reports made about CSAM created with AI.

NBC News identified 36 state and federal criminal court cases brought within the last three years related to or mentioning AI-generated CSAM, spanning 22 states. In several cases NBC News reviewed, defendants were allegedly caught with thousands of AI-generated CSAM images. While over half of the cases NBC News reviewed are still active, all closed cases have resulted in guilty verdicts.

The cases appear to represent only a small part of the problem, but Prado said reports rarely translate directly into prosecutable cases.

“The fact of the matter is, half a million reports just in the first six months of the calendar year, that’s not going to result in 500,000 investigations, or certainly not 500,000 arrests,” Prado said. “Let’s say multiple reports pertain to one individual, so it’s hard to track just exactly how prevalent it is amongst the general population.”

How people are creating more AI-generated CSAM

Creators of AI-generated CSAM use a constellation of apps and platforms to generate abusive material, outpacing enforcement efforts.

While public attention has been focused on companies racing to create more powerful models, many smaller companies and websites have sprung up that offer similar features. A review of the legal cases highlights how these smaller platforms can fly under the radar of law enforcement.

NBC News found five criminal cases that involved defendants allegedly using small AI platforms like Bashable.art, undress.ai and Faceswapper.AI — which seemingly have less robust platform moderation or were expressly built for making explicit content — to create nude imagery of children. None of the platforms were named as defendants in the cases, and none responded to requests for comment.

An Idaho man allegedly generated over a thousand images of “Apparent Child Pornography” using Bashable.art, according to a federal complaint. Investigators found that the man was a registered sex offender who had been arrested 21 years earlier for sexually abusing a 13-year-old girl.

Using the platform’s “unrestricted mode,” he allegedly prompted the program to create nude images of children under 13, including requests for images of a “large group of girls who are age 11 years old taking a shower” and a “10 year old little nude girl.” The case is still active.

Bashable.art restricts explicit content but gives registered users access to its “unrestricted” mode, which “removes any filters on prompts and models, and allows viewing other shared unrestricted generations,” according to its website. The platform’s website also says it monitors content created in unrestricted mode and may suspend users and report them to NCMEC.

While the defendant who allegedly used Bashable.art is not accused of generating images of known victims, the defendants in cases involving undress.ai and Faceswapper.AI are.

Platforms like undress.ai are part of a network of “nudify” generators designed solely to create explicit deepfakes using images of real people.

In a federal criminal case, a defendant allegedly used the website DeepSukebe, described as an “AI-Leveraged Nudifier” that generates deepfake nude images of women from a clothed photograph, according to a motion to suppress evidence. DeepSukebe did not respond to a request for comment.

According to a Justice Department press release, the man used AI to “digitally alter clothed images of minors making them sexually explicit,” including images “from a school dance and a photo commemorating the first day of school.” The man, who had also possessed videos and images of children that he secretly recorded, was sentenced to 40 years in prison.

Open-source AI models present particularly difficult issues in the effort to fight CSAM, allowing anyone to download, copy, modify and operate them.

Stable Diffusion, a widely used open-source image model from the company Stability AI, was allegedly used by a Wisconsin man to create CSAM, according to a federal court brief.

Law enforcement alleged that the man had used Stable Diffusion as well as “special add-ons created by other Stable Diffusion users that specialized in producing genitalia,” which allowed him to “generate photo-realistic images of minors,” according to the brief in the ongoing case. A lawyer representing the man declined to comment.

In response to a request for comment on the case, a Stability AI spokesperson told NBC News that the company “is deeply committed to preventing the misuse of AI and has always prohibited the use of our image models and tools for unlawful activity, including all attempts to edit or create CSAM.”

Riana Pfefferkorn, a policy fellow at Stanford University’s Institute for Human-Centered Artificial Intelligence, said the use of open-source platforms has made it difficult for authorities to crack down on AI-generated CSAM.

“When you have an identifiable entity that has a U.S. presence and has a corporate office, you can pin them down,” Pfefferkorn said. But open-source models like Stable Diffusion 1.5, she said, can “float around out there and can keep being trained up locally.”

Larger companies face an uphill battle given their number of users. Major tech companies have submitted thousands of reports of users potentially using their services to create CSAM, according to a report from NCMEC’s CyberTipline — a reporting mechanism where electronic service providers can flag potential CSAM to the center. Platforms are legally mandated to report potential CSAM.

At least one major player appears to be exacerbating the issue. In January, Elon Musk’s X faced global backlash after an update to its AI tool Grok allowed users to create and post nonconsensual deepfakes. The U.K.-based Internet Watch Foundation told NBC News that dark web users were sharing “criminal imagery” of minor girls allegedly created with Grok.

Musk later responded in an X post by saying that he was “not aware of any naked underage images generated by Grok,” and that Grok “will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state.”

Real versus AI

Increasingly real-looking AI-generated content has introduced an issue for CSAM investigators: differentiating between imagery of real people and fully virtual material.

“The advances in generative AI year over year have made these images become extremely photorealistic,” McNulty said. “I think that is certainly a fear, that law enforcement may be spending days looking for someone who doesn’t exist.”

The distinction can drastically change how a case is prosecuted.

In one ongoing federal case, a man who is accused of using AI to generate explicit imagery of children with no known victim has pending charges related to federal obscenity laws, rather than federal CSAM laws, since the allegedly generated children do not physically exist.

Pfefferkorn said she has reviewed over 60 state and federal AI-related CSAM cases and found that obscenity charges have largely been used in cases that do not feature real children. However, she said that most people found to possess AI-generated CSAM possess real CSAM as well. “You can nail them to the wall for that.”

When real victims have been involved, though, prosecutors have argued that charges shouldn’t be adjusted because of the use of AI.

In an Arkansas case, a defendant tried to dismiss charges against him by arguing that the images, which placed photos of children onto the bodies of adults engaging in sexual activity, were computer-generated. The prosecuting attorney for the case said in response that altered images “would still run afoul of Arkansas’ law prohibiting the production and promotion of sexually explicit conduct involving a child,” according to a brief in the case. The man was found guilty.

For NCMEC, the source of the image is secondary to its impact.

“We at NCMEC consider all of those images the same. We still consider them to be a harm, whether they’re fully AI-generated content or whether they are taken by an offender with access to the child,” said Kathryn Rifenbark, the director of the CyberTipline at NCMEC.

“To the victim, the harm is going to be the same,” she added. “They’re still going to have that impact of that nude picture, whether AI or not, distributed of them online. And since it’s hard for professionals to tell the difference, it’s certainly hard for members of the public to be able to tell the difference, which is why victims are going to be impacted equally.”

The legal landscape

While broader AI regulation remains politically divisive, lawmakers across the aisle are attempting to address AI-generated CSAM, though approaches have varied by state.

According to the watchdog group Public Citizen, 45 states have enacted laws pertaining to intimate AI deepfakes, many of which focus specifically on minors. A deepfake is an AI-generated image, video or audio recording depicting a real person, typically for malicious purposes, that is difficult to distinguish from the real thing. Missouri and New Mexico haven’t passed any such laws yet, and several other states have pending bills.

“My sense is there’s a general interest in passing this type of legislation,” said Ilana Beller, an organizing manager at Public Citizen who created the tracker. “And in states where it hasn’t happened yet, it is not a function of a lack of political will or interest so much as a function of logistics and broader politics.”

She noted that some state laws are specifically tailored toward minors, others toward nonconsensual deepfakes generally, and others outline specific requirements for AI companies.

Beller said states have been proactive about passing legislation on AI-generated CSAM, but that targeting AI companies can be a “trickier area to legislate in” because it can mean that the smaller, unregulated open-source AI platforms are let off the hook.

Four states have passed or introduced legislation that specifically targets platforms, but all four already have legislation that covers AI deepfakes generally.

NBC News identified five cases pertaining to AI-generated CSAM in Missouri, Alaska and Ohio, states that have no specific legal framework to combat the issue. Still, two of the cases, both of which involved known victims, resulted in guilty verdicts related to the possession of child pornography. The other cases are ongoing.

Federal efforts to address AI-generated CSAM are continuing. In May 2025, President Donald Trump signed the TAKE IT DOWN Act, which made the creation of nonconsensual deepfakes a federal crime and requires platforms to take down imagery 48 hours after it is reported. On Dec. 16, the Enhancing Necessary Federal Offenses Regarding Child Exploitation (ENFORCE) Act passed the Senate unanimously.

It would allow for the creators and distributors of AI-generated CSAM to be prosecuted to the same degree as those who create other forms of CSAM. The legislation is now waiting to be reviewed by the House of Representatives.

Beller said the federal law is a step in the right direction, but that state legislation is crucial for civil cases, and for handling a rising caseload. Beller pointed to a New Hampshire law that both prohibits “certain uses of deepfakes” and creates “a private claim of action.”

“It is really important that state and local prosecutors are empowered to address these issues in the courts as well,” Beller said. “The number of cases related to nonconsensual, intimate deepfakes would just be too much for only federal prosecutors. They would only be able to get to a small fraction of the total number of cases.”

Prado said that if the technology continues to evolve at the pace that it has, it will continue to be difficult for lawmakers to find the right approach.

“What I see is the states and the federal government really taking action in response to this problem,” he said. “But as we are well aware, the state legislatures and Congress, there’s often a lag between laws, because it does take time to formulate laws and get them on the books and get people trained to enforce them. It’s hard to keep up with the rapidly evolving nature of technology and generative AI.”
