Saturday, February 15, 2025

Increased amounts of AI-generated child sexual abuse material are being shared online




The rise of artificial intelligence (AI) technology has brought many advancements and conveniences to daily life. However, a recent report from the U.K.-based Internet Watch Foundation (IWF) sheds light on a disturbing trend: an increase in AI-generated child sexual abuse material (CSAM) posted online.

The report highlights a dark side of AI technology: its ability to produce convincing deepfake videos. Deepfakes are deceptive digital media created with AI tools that allow users to manipulate videos and swap faces. An online subculture and marketplace has grown up around pornographic deepfakes, including CSAM.

In a recent 30-day review of a dark web forum used to share CSAM, the IWF found 3,512 AI-generated CSAM images and videos, most of them realistic. That marks a 17% increase over a similar review conducted in the fall of 2023. The review also found that a higher proportion of the material posted on the dark web now depicts more extreme or explicit sex acts than it did six months earlier.

Dan Sexton, the IWF’s chief technology officer, expressed concern over the increasing realism and severity of AI-generated CSAM. While entirely synthetic videos still look unrealistic, Sexton noted that the technology is rapidly improving, raising the possibility of fully synthetic and realistic content in the future.

The use of AI technology to create CSAM poses challenges for regulators, tech companies, and law enforcement in preventing harm. The reliance on old footage to create new CSAM imagery can cause persistent harm to survivors, as footage of their abuse is repeatedly given fresh life.

The rise in deepfaked abuse material also complicates efforts to identify and prosecute offenders. Social media platforms and law enforcement agencies rely on scanning images against databases of known CSAM. Newly deepfaked material may evade these matching systems, making it harder to track pedophiles who trade in such content.

As the technology continues to evolve, it may become more difficult to bring the most serious charges against CSAM traffickers who use AI to create illegal content. The legal gray area surrounding AI-generated CSAM raises unresolved questions about accountability and prosecution.

In light of these concerning developments, it is crucial for individuals to be vigilant and report any instances of child exploitation. If you suspect that you or someone you know is a victim of child exploitation, you can contact the CyberTipline at 1-800-843-5678. Together, we can work towards combating the spread of AI-generated CSAM and protecting vulnerable individuals from harm.
