OpenAI

Introducing the Child Safety Blueprint

A framework for combating and preventing AI-enabled child sexual exploitation

Child sexual exploitation is one of the most urgent challenges of the digital age. AI is rapidly changing both how these harms emerge across the industry and how they can be addressed at scale. 

At OpenAI, we have built and continue to strengthen safeguards to prevent misuse of our systems, and we work closely with partners like the National Center for Missing and Exploited Children (NCMEC) and law enforcement to improve detection and reporting. This work has helped surface where stronger, shared standards are needed across the industry.

Today, we’re introducing a policy blueprint that outlines a practical path forward for strengthening U.S. child protection frameworks in the age of AI. The blueprint incorporates feedback from leading organizations and experts across the child safety ecosystem, including NCMEC, Thorn, and the Attorney General Alliance and its AI Task Force co-chairs, North Carolina Attorney General Jeff Jackson and Utah Attorney General Derek Brown. Their input helps ensure the blueprint reflects their priorities and can facilitate more effective collaboration to prevent harm to children.

The blueprint focuses on three key priorities: modernizing laws to address AI-generated and altered CSAM, improving provider reporting and coordination to support more effective investigations, and building safety-by-design measures directly into AI systems to prevent and detect misuse.

No single intervention can address this challenge alone. This framework brings together legal, operational, and technical approaches to better identify risks, accelerate responses, and support accountability, while ensuring that enforcement authorities remain strong as technology evolves.

Together, these steps enable the industry to address child safety earlier and more effectively. By interrupting exploitation attempts sooner, improving the quality of signals sent to law enforcement, and strengthening accountability across the ecosystem, this framework aims to prevent harm before it happens and help ensure faster protection for children when risks emerge.

“As Co-Chairs of the Attorney General Alliance's AI Task Force, we welcome this blueprint as a meaningful step toward aligning the technology sector's child safety practices with the enforcement realities our offices confront every day. We are particularly encouraged by the framework's recognition that effective GenAI safeguards require layered defenses — not a single technical control, but a combination of detection, refusal mechanisms, human oversight, and continuous adaptation to emerging misuse patterns. This mirrors what we see in practice: the threat evolves constantly, and static solutions are insufficient. Getting the prevention architecture right upstream is the single highest-leverage investment the industry can make in child safety.

Ultimately, the strength of any voluntary framework depends on the specificity of its commitments and the willingness of industry to be held accountable against them. We look forward to continued partnership with OpenAI, NCMEC, and our fellow Attorneys General to ensure these recommendations translate into durable protections for children.”

—State Attorneys General Jeff Jackson (North Carolina) and Derek Brown (Utah), Co-Chairs of the AI Task Force of the Attorney General Alliance

“The Attorney General Alliance is leading the way in protecting young people online by bringing together attorneys general, industry leaders, nonprofits, and global partners to advance practical, forward-looking solutions on AI and digital safety. Through collaboration and innovation, AGA is setting a strong standard for how we safeguard youth while responsibly embracing emerging technologies. We applaud OpenAI’s continuing commitment to safety and engagement with AGA and attorneys general in developing a highly valuable blueprint for child safety.”

—Karen White, Executive Director of the Attorney General Alliance

“Generative AI is accelerating the crime of online child sexual exploitation in deeply troubling ways—lowering barriers, increasing scale, and enabling new forms of harm. But at the same time, the National Center for Missing & Exploited Children (NCMEC) is encouraged to see companies like OpenAI reflect on how these tools can be designed more responsibly, with safeguards built in from the start. No single organization, business or sector can address this alone. We remain committed to working with partners across industry, government, and the child protection community to advance solutions that reduce harm and better support children’s safety.”

—Michelle DeLaune, President & CEO, National Center for Missing & Exploited Children