Back in May, I wrote a piece titled Will Congress Legalize Mark Zuckerberg As Your Therapist?, pointing at a piece of legislation included in Trump’s flagship tax bill that would bar states from regulating artificial intelligence and automated decision-making systems. Such legislation could block state efforts to require that therapy chatbots disclose they are not human, to ban rent-fixing algorithms, to mandate that health providers let customers talk to human beings in customer service, or to prohibit the use of AI models to advertise to gambling addicts. But mostly, it was a bill to ensure that no one would block big tech players from doing whatever it is they want to do.
Here’s Bloomberg on the firms behind the measure:
The measure was the top priority for major technology companies including Microsoft Corp. and Meta Platforms Inc., as well as venture capital firms like Andreessen Horowitz, which back other powerful players. Trump allies in Silicon Valley, including venture capitalist Marc Andreessen, defense tech firm Anduril Industries Inc. founder Palmer Luckey and Palantir Technologies Inc. co-founder Joe Lonsdale all advocated for including the restriction…
Commerce Secretary Howard Lutnick threw his support behind the AI measure, as well, calling it imperative for national security and a step toward quashing blue-state efforts to pass comprehensive AI legislation.
Well, we got some good news. Last night, that provision was stripped out of the bill by a 99-1 vote. It was killed by a combination of the Democratic caucus, Republican Senators Marsha Blackburn and Josh Hawley, and Trump advisor Steve Bannon.
And how it happened is a useful lesson in lawmaking under Trump. Let’s start with what the provision actually did, which was to bar states and localities from regulating AI or automated decision-making. While the version that got to the floor last night was slightly different for procedural reasons, here’s the gist.
This bill would prohibit, for ten years, state regulation of AI and “automated decision systems,” defined as systems that “materially influence or replace human decision making.”
This is a proposal that Texas Senator Ted Cruz in particular has touted, largely coming out of the soup of lobbyists in and around the big tech-aligned Abundance Institute and a variety of libertarian, Koch-funded think tanks, as well as a small number of libertarian Democrats like Colorado Governor Jared Polis.
It’s not a well-written provision, because it’s so vague and all-encompassing. Legal analysts seem to think the definitions give the law a fairly broad legal meaning, so it would prohibit a wide swath of laws already on the books, as well as proposals for new ones. If you read the whole thing, it’s a bit ambiguous what state lawmakers can and can’t do. But we do know a few things. First, it is a bill that tells state lawmakers they can’t regulate in a new area of general purpose technology about which we know little. It’s a bit like Section 230 of the Communications Decency Act, which was vague when written in 1996, but ended up shielding most online activity from legal scrutiny.
And second, it has an exception for lawmakers who want to remove licensing requirements or deregulate. If this provision gets into law, state lawmakers would have a tough time drawing new lines around what is and isn’t a chatbot therapist, handling licensing, or establishing liability for services that accidentally tell teenagers to kill themselves. But they would have an easy time removing licensing requirements that limit Meta.
Such legislation, which was written to last ten years, would set the stage for a catastrophic expansion of price-fixing, algorithmic collusion, and fraud via the use of artificial intelligence models. The version that got to the floor did have some protections for child safety.
Democrats were unified against it, as much for partisan reasons as ideological ones. But there was also significant backlash from Republican state officials against this legislation, because many states have laws that regulate automated or AI systems. Last month, 40 GOP and Democratic attorneys general sent a letter opposing the provision, so you would think it would die. However, Congress is a world apart from local concerns, and the amount of money offered to Republican members of Congress for supporting something like this made it hard to resist.
Still, opponents rallied. Tennessee Senator Marsha Blackburn, who is an iconoclast, “raised concerns the measure would block her home state’s Elvis Act, a law that prohibits the non-consensual use of AI to mimic musicians’ voices.” She was also concerned that the bill would disallow laws meant to protect kids. A number of other GOP Senators, like big tech foe Josh Hawley, were also worried. So Ted Cruz cut a deal with Blackburn, agreeing on a “compromise” that would cut the moratorium to five years and include some ability to regulate. It’s likely this compromise was authored by Meta or one of the other big tech firms, because in some ways, it loosened protections for children. That overreach doomed the compromise.
After Blackburn cut her compromise deal, there was an outcry by a host of child safety and online advocacy groups, which led to Bannon speaking with Blackburn. And she ended up opposing the full provision, either because of Senate procedure, or because the compromise was actually worse than promised. And when she flipped, Cruz realized he would lose, so he sided with her, and the provision went down 99-1.
So what do we learn from this episode? First, we need public financing of elections. It’s become increasingly clear that financial dependencies make it almost impossible to make good policy. There were probably two dozen Republican Senators who would have openly opposed this provision if they didn’t have to rely solely on corporate funds for elections. In truth, this provision never should have been proposed in the first place, let alone required a bitter fight to block it. But the way we finance elections creates awful incentives.
Second, relatedly, GOP politics is deeply authoritarian in its internal structure; if you don’t have a ten-figure net worth or some level of fame, it’s almost impossible to make independent decisions. That’s why characters like Hawley and Blackburn matter in a GOP-dominated legislative process: they actually have leverage, built off of years of political work developing a socially conservative constituency. Another figure who matters is Steve Bannon, who has a big audience and the ear of the President. He is famous, and so does not need to do the bidding of powerful corporate interests, so long as he maintains other sources of power.
Third, the GOP is, unfortunately, still the party of George W. Bush. This kind of raw deregulatory legislation in an area like automated decision-making and AI is exactly what Bush-era Republican establishment figures would enact. One might conclude that the Republicans have changed, since it didn’t end up getting enacted. But it was a fight up until the last minute. And we’ve seen endless scandals from big tech and AI, which was not true during the Bush era. That the provision came as far as it did speaks to the basic assumptions still baked into the GOP. Conservative observers are noting that the GOP, handed an opportunity for a populist turn, has fumbled it.
Finally, don’t be fooled by the lopsided vote; this AI regulation ban was much closer to becoming law than the margin suggests. The attempt to eliminate the regulation of automated decision-making and AI systems will return. Big business is going to have an open checkbook going forward, unfathomable amounts of money, to enact its agenda. Ultimately, money buys time on TV, but it can’t buy votes. And that’s the reason this AI regulation moratorium went down.