AI-Generated Child Sexual Abuse Material May Overwhelm Tip Line

A new flood of child sexual abuse material created by artificial intelligence is threatening to overwhelm authorities who are already held back by antiquated technology and laws, according to a new report released Monday by Stanford University’s Internet Observatory.

Over the past year, new A.I. technologies have made it easier for criminals to create explicit images of children. Now, Stanford researchers are cautioning that the National Center for Missing and Exploited Children, a nonprofit that acts as a central coordinating agency and receives a majority of its funding from the federal government, doesn’t have the resources to fight the rising threat.

The organization’s CyberTipline, created in 1998, is the federal clearing house for all reports on child sexual abuse material, or CSAM, online and is used by law enforcement to investigate crimes. But many of the tips received are incomplete or riddled with inaccuracies. Its small staff has also struggled to keep up with the volume.

“Almost certainly in the years to come, the CyberTipline will be flooded with highly realistic-looking A.I. content, which is going to make it even harder for law enforcement to identify real children who need to be rescued,” said Shelby Grossman, one of the report’s authors.

The National Center for Missing and Exploited Children is on the front lines of a new battle against sexually exploitative images created with A.I., an emerging area of crime still being delineated by lawmakers and law enforcement. Already, amid an epidemic of deepfake A.I.-generated nudes circulating in schools, some lawmakers are taking action to ensure such content is deemed illegal.


A.I.-generated CSAM is illegal if it depicts real children or if images of actual children were used to train the A.I. models, researchers say. But synthetically created images that do not depict real children could be protected as free speech, according to one of the report’s authors.

Public outrage over the proliferation of online sexual abuse images of children exploded in a recent hearing with the chief executives of Meta, Snap, TikTok, Discord and X, who were excoriated by lawmakers for not doing enough to protect young children online.

The center, which fields tips from individuals and from companies like Facebook and Google, has argued for legislation to increase its funding and give it access to more technology. Stanford researchers said the organization provided access to interviews with employees and to its systems for the report in order to highlight the vulnerabilities of systems that need updating.

“Over the years, the complexity of reports and the severity of the crimes against children continue to evolve,” the organization said in a statement. “Therefore, leveraging emerging technological solutions into the entire CyberTipline process leads to more children being safeguarded and offenders being held accountable.”

The Stanford researchers found that the organization needed to change the way its tip line worked to ensure that law enforcement could determine which reports involved A.I.-generated content, as well as ensure that companies reporting potential abuse material on their platforms fill out the forms completely.

Fewer than half of all reports made to the CyberTipline were “actionable” in 2022, either because companies reporting the abuse failed to provide sufficient information or because the image in a tip had spread rapidly online and was reported too many times. The tip line has an option to flag whether the content in a tip is a potential meme, but many companies don’t use it.


On a single day earlier this year, a record one million reports of child sexual abuse material flooded the federal clearinghouse. For weeks, investigators worked to respond to the unusual spike. It turned out many of the reports were related to an image in a meme that people were sharing across platforms to express outrage, not malicious intent. But it still ate up significant investigative resources.

That trend will worsen as A.I.-generated content accelerates, said Alex Stamos, one of the authors of the Stanford report.

“One million identical images is hard enough, one million separate images created by A.I. would break them,” Mr. Stamos said.

The center and its contractors are restricted from using cloud computing providers and are required to store images locally on their own computers. That requirement makes it difficult to build and use the specialized hardware needed to create and train A.I. models for their investigations, the researchers found.

The organization doesn’t typically have the technology needed to broadly use facial recognition software to identify victims and offenders. Much of the processing of reports is still manual.
