Today's push for a 10-year artificial intelligence moratorium has been framed in legislation as a "temporary pause" on state AI and algorithmic regulation. Proponents claim this is necessary to prevent "an unworkable patchwork of disparate and conflicting state AI laws" and to preserve America's lead over China in AI.
But this ignores a more immediate and dangerous reality: The moratorium would unnecessarily paralyze the most promising and urgently needed online reforms in America today, those designed to protect children online.
Over the past two years, states have led the nation in passing innovative laws to combat exploitative uses of AI, such as deepfake child sexual abuse imagery and manipulative recommendation algorithms. Thirty-eight states have updated their child sexual abuse material laws to include AI-generated content.
If the moratorium becomes law, though, the remaining 12 states would be blocked from enacting similar protections if they accept new or reobligated Broadband Equity, Access, and Deployment (BEAD) program funding; all 50 states submitted proposals to that $42.45 billion program.
Perhaps even more concerning, all 50 states could be barred from enforcing existing laws under the same conditions.
This moratorium is not a symbolic gesture of unity; it carries real, enforceable consequences. It threatens to claw back the entirety of a state's BEAD funds, including already-obligated dollars, unless the state agrees to halt both the enactment and enforcement of any law regulating AI or "automated decision systems."
That latter term is defined so broadly that it sweeps in virtually all modern algorithms, which form the backbone of nearly every online platform and digital interaction. As a result, states could be blocked from implementing laws on age verification, content moderation, algorithmic recommendations, and "nudifying" and voice-cloning apps, which are precisely the tools being used to target and harm children online.
Many of the most innovative child safety laws in the U.S. today began in the states.
Arizona's HB 2175 ensures medical insurance claims aren't completely outsourced to algorithmic tools by requiring insurance medical directors to review claim denials. Tennessee's ELVIS Act protects artists, including minors on platforms like Instagram, by banning AI-generated replicas of their voices without their consent.
New York's SAFE for Kids Act requires platforms to obtain parental consent before subjecting minors to addictive algorithmic feeds or overnight access. Florida recently passed a law barring under-14s from social media altogether, a response to mounting data on how platform design harms mental health.
Supporters of the moratorium argue that governing technology does not require laws that specifically address it. These laws are just a few examples of how kids would pay the price for that wishful thinking.
States have also taken the lead on age verification. Twenty-four states have passed laws requiring adult websites to verify the age of users. Utah and Texas have enacted laws that apply this logic to app stores, requiring age-appropriate design and parental consent.
These policies are narrowly tailored, data-minimizing, and often bipartisan, precisely the kind of innovation Congress claims to want but has so far failed to deliver. Outside of Sen. Ted Cruz's bipartisan TAKE IT DOWN Act, which addresses nonconsensual deepfake images, there is no comprehensive federal AI child safety law. Yet under the moratorium, states would be barred from filling this gap.
Supporters of the moratorium claim it prevents states like California from setting national rules for AI. But in practice, California, with a $325 billion budget, can afford to forgo BEAD funds and continue regulating AI. It's smaller, often red states (say, one with a $4 billion budget) that can't.
The result is a perverse dynamic where progressive states are free to impose EU-style rules, while conservative states are blocked from passing more balanced, locally grounded protections.
This isn't theoretical. The states being handcuffed by the moratorium are the same ones that banned TikTok on government devices long before Congress acted. They are the states creating the political and policy momentum necessary for federal change. Removing that power means removing the engine of American AI accountability, and sacrificing children's safety in the process.
The AI moratorium doesn't stop China. It stops states like Texas, Tennessee, and Utah. It doesn't defend innovation. It defends Silicon Valley from scrutiny. And it doesn't protect kids. It protects Big Tech.