

California Attorney General Rob Bonta, a Democrat who is leading the litigation, called the move a “blatantly illegal” threat by Trump to yank funds used to improve roads and prepare for emergencies if states do not use their resources to support immigration enforcement.
Yeah, I suspect that this is going to be shot down by the Supreme Court. There’s some case law on the federal government using the power of the purse to withhold funds from states, and I don’t think that it supports this.
https://en.wikipedia.org/wiki/Power_of_the_purse
South Dakota v. Dole upheld the federal government’s ability to withhold highway funds over state drinking-age laws, but one of the necessary elements the Supreme Court identified there was that the condition had to relate directly to the federal interest in the funded program, and this doesn’t do that.
In National Federation of Independent Business v. Sebelius, SCOTUS shot down the federal government’s withholding of Medicaid funds from states that didn’t expand their coverage as coercive, and I expect that this will probably fall into the same camp.
AI voice synth is pretty solidly useful in comparison to, say, video generation from scratch. I think that there are good uses for voice synth — e.g. filling in for an aging actor or actress who can’t do a voice any more, video game mods, procedurally-generated speech, etc. — but audiobooks don’t really play to those strengths. I’m a little skeptical that in 2025, it’s at the point where it’s a good drop-in replacement for human audiobook narration. What I’ve heard still doesn’t have emphasis on par with a human reader.
I don’t know what it costs to have a human read an audiobook, but I can’t imagine that it’s that expensive; I doubt that there’s all that much editing involved.
kagis
https://www.reddit.com/r/litrpg/comments/1426xav/whats_the_average_narrator_cost/
That’s actually lower than I expected. Like, if a book sells at any kind of volume, it can’t be that hard to make that back.
EDIT: I can believe that it’s possible to build a speech synth system that does do better, mind — I certainly don’t think that there are any fundamental limitations here. I’d guess that there’s also room for human-assisted stuff, where you have some system that annotates the text with emphasis markers, and the annotated text gets fed into a speech synth engine trained to convert annotated text to voice. Then someone listens to the output and just tweaks the annotations where the annotation system doesn’t get it quite right. But I don’t think that we’re really there today.
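To make that pipeline idea concrete, here’s a minimal sketch of the annotation step in Python. It emits SSML (the real markup standard that TTS engines like Amazon Polly and Google Cloud TTS accept); the word-matching heuristic is purely a placeholder I made up — a real system would use a trained model to decide where emphasis goes, and the human would edit the tags afterward:

```python
# Hypothetical sketch: mark chosen words for emphasis, emitting SSML
# that a downstream TTS engine could consume. The "which words get
# emphasis" heuristic here (a hand-supplied set) is a stand-in for a
# trained annotation model; a human editor would tweak the tags where
# the automatic pass gets it wrong.

def annotate(text: str, emphasized: set[str]) -> str:
    """Wrap flagged words in SSML <emphasis> tags."""
    out = []
    for word in text.split():
        # Strip trailing punctuation for the lookup, keep it in output.
        core = word.rstrip(".,!?;:")
        if core.lower() in emphasized:
            tail = word[len(core):]
            out.append(f'<emphasis level="strong">{core}</emphasis>{tail}')
        else:
            out.append(word)
    return "<speak>" + " ".join(out) + "</speak>"

ssml = annotate("I never said he stole the money.", {"never"})
print(ssml)
# → <speak>I <emphasis level="strong">never</emphasis> said he stole the money.</speak>
```

The point of the design is that the annotated text is a human-readable intermediate artifact: the editor fixes the markup, not the audio, so corrections are cheap and the synth engine just re-renders.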