A Colorado statute set to take effect next month would take a regulatory mechanism familiar from civil rights law—the requirement that private actors examine outcomes by race and adjust their conduct when the numbers come out wrong—and push it into territory the Supreme Court has so far avoided: artificial intelligence.
Colorado Senate Bill 24-205, currently under challenge in federal court by Elon Musk’s xAI and the Department of Justice, was sold to Coloradans as a shield against “algorithmic discrimination.” This is a term that, in the law’s own usage, does not mean what an ordinary citizen would assume. It does not refer to an AI system designed to disadvantage a person on the basis of race or sex. It means an AI system whose outputs, in the aggregate, fail to mirror the demographic ratios the state finds politically congenial.
Under the regime envisioned by the bill, developers and deployers of “high-risk” AI—the kind used in mortgage lending, college admissions, and hiring—must produce disclosures, perform impact assessments, and take “reasonable care” to prevent unintentional disparate impact. They must, in short, look at the outcomes of their models and, where the outcomes deviate from preferred racial targets, intervene to bring them into line.
It is the same civil rights regime that has governed America since the late 1960s: state-mandated racial quotas, dressed up in the language of anti-discrimination.
The most damning feature of Colorado’s bill is its carveout for discrimination in circumstances approved by the state. Per the law’s text, algorithmic discrimination can only be allowed if it is intended to “increase diversity” or “redress historic discrimination.” The same conduct that would expose a company to liability if undertaken for one purpose—disadvantaging minorities—becomes legally protected when undertaken for another—disadvantaging whites. The state has thus written, into black-letter law, a blatant double standard that violates the bedrock principle of equality under the law.
Anyone who recalls the Supreme Court’s reasoning in Students for Fair Admissions v. Harvard will recognize what is happening. In zero-sum competitions—for finite mortgages, finite admissions slots, or finite job offers—an algorithm engineered to favor one group is necessarily an algorithm engineered to disfavor others. The Fourteenth Amendment does not permit a state to mandate that arrangement, and it does not permit a state to grant immunity for one direction of discrimination while punishing the other. As Assistant Attorney General Harmeet Dhillon put it, “Laws that require AI companies to infect their products with woke DEI ideology are illegal.” She is correct on the law and, more importantly, correct on the principle.
Reasonable Democrats who had supported the bill are experiencing buyer’s remorse. Amid sluggish job growth and rumblings of discontent from Colorado’s business community, state legislators are working to introduce a “slimmer version” of the bill. The state’s governor, Jared Polis, is eager for the matter to be out of his hands, going so far as to welcome the Trump administration’s move to pre-empt state laws by providing a federal framework. “I generally agree with the direction the White House is taking to pre-empt state laws on AI,” said Polis in an interview.
To be fair, the problem goes beyond Colorado. In 2023, New York City passed a law requiring businesses to conduct an “annual bias audit” of all AI tools related to employment, to detect any disparate impact against protected groups. In 2024, Illinois amended its Human Rights Act, making it a civil rights violation for any employer to use AI that has a discriminatory effect. In 2025, California made it unlawful for employers to use any “automated-decision system” that produces a disparate impact on protected groups in hiring, promotion, or firing. Across the country, Democratic officials are sending a clear message to the AI industry: The Civil Rights era is not dead, whatever the Supreme Court might say.
Ask a normal American what worries him about artificial intelligence, and he will not say “disparate impact ratios in mortgage approval algorithms.” He will say his face, his voice, and his likeness are being lifted from social media and weaponized into deepfakes. He will say he does not understand what data is being harvested from him, or by whom, or to what end. He may express some worry about whether end-to-end encryption, the previously impenetrable barrier between his private communications and a thousand interested parties, may soon be broken by a superintelligent machine. He may have heard about AI-boosted phone scams and say he is concerned that what sounds like his son asking for money is a stranger looking to rip him off. None of these concerns is addressed by Colorado’s attempt to write racial quotas into AI algorithms.
The only stab at AI regulation that seems halfway sensible is the administration’s National Policy Framework for Artificial Intelligence. Released in March, the four-page legislative blueprint has the virtue of being aimed at the right problems. Its seven pillars include child protection, parental controls, age assurance, and meaningful safeguards against exploitation and self-harm.
Most importantly, the White House framework treats free speech and protection from compelled ideological output as constitutional baselines rather than obstacles. It rejects the creation of a sprawling federal AI regulatory behemoth in favor of sector-specific oversight. And it calls for the preemption of state laws that impose “undue burdens” on developers—burdens of precisely the sort Colorado was about to impose.
Readers may object: Isn’t the correct conservative position to favor the states against the federal leviathan? In the abstract, yes. In reality, the question of whether state or federal authority better serves liberty often depends on who is exercising each. There are seasons in American life when Washington is the protector of constitutional rights against state overreach.
This is one of those times. The blue-state legislatures most eager to regulate AI have shown, in Colorado as elsewhere, that they will use their power to push an ideology that the Supreme Court has rejected, and from which the public has repeatedly withheld its endorsement.
For now, the executive and legislative branches of the federal government are controlled by officials who understand the Equal Protection Clause as written, who treat free speech as a constraint on government rather than a regrettable inconvenience, and who are willing to intervene when a state attempts, yet again, to conscript private firms into a race-quota regime.