December 29, 2025.
In June 2025, Florida Governor Ron DeSantis vetoed a bill that would have studied how artificial intelligence would affect employment in his state. Six months later, he proposed sweeping AI restrictions based on testimony from two parents suing an AI company.
This isn’t an outlier. It’s the template.
Across the United States, governments at every level are racing to regulate AI, and they’re doing it without funding research, consulting technologists, or waiting for evidence about what actually works. The result is a wave of legislation driven by headlines, anecdotes, and borrowed templates rather than systematic understanding of the systems being regulated.
We’re watching governments play governance on story mode, regulating the narrative of AI rather than the technology itself. And the consequences, for innovation, for rights, for the communities these laws claim to protect, are predictable and preventable.
The numbers tell the story
In 2025 alone, lawmakers across all 50 U.S. states introduced more than 1,080 AI-related bills. That’s an extraordinary level of legislative activity for a technology most legislators don’t use, don’t understand, and haven’t studied.
Of those 1,080+ bills, only 118 became law, a passage rate of just 11 percent. What does that tell you? It tells you that most of this legislative energy isn’t about solving real problems. It’s about being seen solving problems, about responding to moral panic with the appearance of action.
The bills that did pass fall into predictable categories:
Disclosure requirements (tell users when they’re talking to AI)
Restrictions on “high-risk” AI in employment, housing, healthcare, and insurance
Bans on AI-generated deepfakes in political ads
Age verification and parental consent rules for minors using AI chatbots
Prohibitions on AI-delivered mental health counseling
Almost none of these laws were preceded by publicly funded research examining:
Whether the harms they target are actually occurring at scale
What the base rates are (how many AI interactions cause harm vs. benefit)
What AI is replacing (and whether the replacement is better or worse)
What compliance costs will be, and who will bear them
Whether the proposed restrictions will achieve their stated goals
Governments are writing rules for systems they haven’t studied, targeting harms they haven’t measured, and imposing costs they haven’t calculated.
Copy-paste governance and the race to regulate
Even more revealing is how these bills are written. Lawmakers don’t start from scratch. They copy.
Colorado passed the first comprehensive AI law in May 2024, the Colorado Artificial Intelligence Act (CAIA), modeled loosely on the EU AI Act. It created a “high-risk” classification system for AI used in consequential decisions (employment, credit, housing, healthcare), imposed compliance burdens on developers and deployers, and required annual impact assessments.
Governor Jared Polis signed it, then immediately issued a rare signing statement warning that the law might “stifle innovation” and calling for federal preemption to avoid a state-by-state “patchwork.” Within a year, Colorado tried to delay its own law’s implementation, saying it needed more time to figure out whether it had written a workable rule.
But by then it was too late. Connecticut, Virginia, New York, Rhode Island, Michigan, Illinois, and Texas had already introduced bills modeled on Colorado’s framework. Most failed or were vetoed. Virginia’s governor vetoed his state’s version, citing the same concerns Polis had raised. Connecticut’s bill stalled. New York’s is pending.
The pattern is unmistakable: One state passes a “landmark” AI law without clear evidence it will work. Other states copy it. Governors and attorneys general immediately start calling for federal preemption because they realize state-by-state AI rules are unworkable. But the copying continues anyway, because being first or tough on Big Tech is politically valuable even if the policy is incoherent.
The same dynamic is playing out with social media age verification laws. Florida, Georgia, Louisiana, Nebraska, Virginia, and more than a dozen other states have passed or proposed laws requiring platforms to verify users’ ages and obtain parental consent for minors. The bills are nearly identical. Many were written by the same advocacy groups and recycled across state legislatures with minimal adaptation to local needs or constitutional concerns.
Federal judges have already blocked Florida’s law as likely unconstitutional, finding that it violates minors’ First Amendment rights. But that hasn’t stopped other states from passing near-copies of the same bill, ensuring years of expensive litigation over rules that were never evidence-based to begin with.
The research gap: What governments refuse to know
Here’s what makes Florida’s AI story especially revealing: The state had the chance to gather real evidence, and chose not to.
House Bill 827 would have required Florida’s Department of Commerce to study:
Which jobs were most at risk from AI-driven automation
What regions and demographics faced the greatest displacement
How wages would be affected
What workforce training programs Florida would need
The bill passed the Legislature with only one dissenting vote. DeSantis vetoed it, claiming the research would be “obsolete by the time it was published.”
Then he turned around and proposed AI restrictions, based on two parents’ lawsuit against Character.AI: banning AI therapy, limiting AI chatbots for minors, and blocking data center subsidies.
Florida is not unique. Across the country, states are introducing AI bills at breakneck speed while refusing to fund the research that would tell them whether those bills are necessary, effective, or harmful.
Of the 1,080+ AI bills introduced in 2025, fewer than 50 were research bills or study commissions. Most of those were symbolic, asking agencies to “examine” AI without funding staff, data collection, or peer review. The few that included real funding were often vetoed or amended to strip the research budget.
Governments would rather regulate AI than understand it. Understanding takes time, requires expertise, and might produce answers that complicate the preferred narrative. Regulation is faster, more visible, and easier to sell as “doing something” about a problem.
What evidence-based AI governance would actually look like
Let’s be clear: KlueIQ is not anti-regulation. We’re pro-evidence. We build games and systems that simulate crime, investigation, and decision-making precisely because understanding complex systems requires modeling them, not just reacting to headlines.
If governments were serious about AI governance, serious about protecting people and enabling innovation, here’s what they would do:
1. Fund research before regulating
Before writing a single AI restriction, fund multi-year studies examining:
What AI systems are actually being deployed, by whom, and for what purposes
What harms are occurring, at what scale, and to which populations
What benefits are occurring (what AI is replacing, and whether the replacement is better)
What existing laws already cover (consumer protection, civil rights, fraud, negligence)
What gaps exist that genuinely require new rules
Publish the data. Make the methods transparent. Update the research as the technology evolves.
2. Consult technologists, not just advocates
Every AI regulatory process should include:
Engineers and researchers who build AI systems (not just executives who market them)
Ethicists and social scientists who study algorithmic harm and fairness
Civil rights and consumer advocates
Affected communities who will live with the rules
Economic analysts who can assess compliance costs and distributional effects
If your regulatory roundtable features only grieving parents, law enforcement, and politicians, you’re not doing governance. You’re doing grief theater.
3. Pilot and test before mandating
Instead of passing sweeping bans or compliance regimes that take effect statewide, create regulatory sandboxes:
Let companies test AI systems under controlled conditions with close oversight
Measure what happens, both harms and benefits
Adjust rules based on real-world performance, not hypothetical risks
Texas, Delaware, and Utah have begun experimenting with AI sandboxes. This is the right direction: learn by doing, not by decree.
4. Update rules as evidence changes
AI is not static. Governance shouldn’t be either. Build sunset provisions into AI laws: Every restriction expires after five years unless renewed based on updated evidence. Require annual reporting on compliance costs, enforcement actions, and measurable outcomes.
If a rule isn’t working, if it’s not reducing the harms it targeted, or if it’s imposing costs far beyond its benefits, change it. Governing AI requires intellectual humility and adaptive capacity, not “landmark legislation” that becomes obsolete the day it’s signed.
5. Prefer federal standards to state patchworks
This is not about centralization for its own sake. It’s about avoiding a system where companies face 50 different AI compliance regimes, none of them grounded in research, many of them contradictory.
Federal standards, ideally developed through a transparent, evidence-based process involving researchers, industry, civil society, and affected communities, create a level playing field and allow innovation to scale. State experimentation can inform federal policy, but state-by-state mandates for a national (and global) technology are a recipe for chaos.
The cost of governing by anecdote
When governments regulate AI without research, three things happen:
First, the rules miss the actual problems. Florida’s AI Bill of Rights targets chatbots and data centers, but does nothing about the economic displacement DeSantis refused to study. It’s solving for the headline case (a tragic teen suicide) while ignoring the systemic challenge (what happens to Florida workers when AI automates their jobs).
Second, the rules create compliance costs that fall hardest on smaller players. Large AI companies can afford legal teams to navigate 50 state laws. Startups, researchers, and open-source developers can’t. The result is regulatory capture: rules written to “crack down on Big Tech” end up protecting Big Tech by raising barriers to entry.
Third, the rules undermine public trust. When legislation is visibly theatrical, when it’s clear that lawmakers didn’t do the homework, didn’t consult experts, and didn’t think through the consequences, people stop believing that government can govern technology responsibly. That creates space for either industry self-regulation (which lacks democratic accountability) or federal preemption (which, if done badly, produces the same problems at a larger scale).
A different model is possible
At KlueIQ, we build systems that require understanding how things actually work, how investigations unfold, how evidence accumulates, how decisions get made under uncertainty. We don’t simulate crime by copying headlines. We model systems.
Governments can do the same. They can fund research. They can consult experts. They can test rules before mandating them. They can update laws as evidence changes. They can distinguish between what one tragic case involved and what the technology actually does at scale.
The alternative, “decree first, research never,” isn’t governance. It’s live-action role-play with people’s jobs, rights, and futures.
Florida’s AI story is a warning, not an anomaly. Across the country, governments are choosing narrative over evidence, anecdote over analysis, theater over systems thinking.
We can do better. We have to.