The Federal AI Policy Framework has been released. Well, it is a four page outline, and mostly it reiterates existing outlines. But that is four more pages than we had previously. It includes the beginnings of actual policy proposals, some of which are highly welcome and actively good.
Perhaps most importantly, it affirms that we are a Republic in which the way we Do Policy is we pass a law through Congress specifying what we do, and that we need to actually Do Policy alongside trying to ban others who might attempt to Do Policy.
It also acknowledges that, as a practical political matter, a moratorium banning all state AI laws cannot simply be attached to a few child safety rules.
I was especially heartened by the call for protections for free speech that guard in particular against the Federal Government, especially given what else is happening. That doesn’t fill the role of other things but it is most welcome.
Alas, I couldn’t support even a strong implementation of this proposal as written, because it overrides state laws in the most important places and replaces them with essentially nothing.
As in, this is not, as written, a way to deal with frontier, catastrophic or existential risks, and only mentions them in the context of ‘national security’ concerns, and has no mention of any transparency requirements. This is largely an attempt to kill SB 53 and the RAISE Act without substituting anything in return.
However, if this were to include an exception for classes of laws addressing frontier risks, and the resulting bill was otherwise sufficiently well written and implemented, I could see finding it acceptable.
So let’s see what it says.
My comments are on the third nested level and in sections with ‘Overall.’
The most surprising thing was an excellent section on free speech protections that in particular focused on protecting us from the Federal Government’s restrictions on speech. That is badly needed and most welcome.
As expected, the solution to most problems is to ignore them and hope existing law happens to work well, including in several places where they call for letting things play out in court, actively declining to choose policy for the AI era at all.
I’d say you should support this if and only if you think ‘ban states from regulating AI without actually regulating most of AI aside from thinking of the children, and then probably never doing so and hoping for the best’ is a good plan.
Protecting Children and Empowering Parents.
Take It Down Act.
Okay, sure.
“Congress should empower parents and guardians with robust tools to manage their children’s privacy settings, screen time, content exposure, and account controls.”
That’s not even AI, although it is a good idea.
“Congress should establish commercially reasonable, privacy protective, age assurance requirements (such as parental attestation) for AI platforms and services likely to be accessed by minors.”
That sounds a lot like age verification requirements for access to AI services, but I have been assured that this means AI enabled detection, and that Congress and industry understand what this means.
Assuming that is indeed what this means, and it has minimum platform sizes to avoid undue burdens, then okay, acceptable. Now let’s do the same for social media.
“Congress should require AI platforms and services likely to be accessed by minors to implement features that reduce the risks of sexual exploitation and self-harm to minors.”
What does that mean exactly? Devil is in the details.
“Congress should affirm that existing child privacy protections apply to AI systems, including limits on data collection for model training and targeted advertising.”
Affirmations are fine, yeah, okay, sure.
“Congress should avoid setting ambiguous standards about permissible content, or open-ended liability, that could give rise to excessive litigation.”
Agreed but not sure why that is part of a framework.
“Congress should ensure that it does not preempt states from enforcing their own generally applicable laws protecting children, such as prohibitions on child sexual abuse material, even where such material is generated by AI.”
Agreed again, but this is remarkably narrow as a thing to avoid.
Overall: Most of this is a bunch of applause lights with no attempt to actually figure out implementation or tackle the hard questions. The concrete request is age assurance technology on AI platforms, which seems like the right compromise if done correctly.
Safeguarding and Strengthening American Communities
No increased electricity costs from AI data centers.
Hyperscalers have previously agreed to this. Good symbolic move.
Streamline federal permitting for AI infrastructure.
“Congress should augment existing law enforcement efforts to combat AI-enabled impersonation scams and fraud that target vulnerable populations such as seniors.”
One of those remarkably narrow and specific targets, but okay, sure.
“Congress should ensure that the appropriate agencies within the national security enterprise possess sufficient technical capacity to understand frontier AI model capabilities and any associated national security considerations and establish plans to mitigate potential concerns, including through consultation with frontier AI model developers.”
Anthropic’s situation has emphasized the need for agencies to understand AI models, especially at the frontier, since it is clear from DoW’s statements there that DoW has no idea how any of this works.
So far, the moves during this administration to ‘mitigate potential concerns’ have not only not attempted to mitigate actual concerns, they have directly caused more concerns and made the situation worse.
Provide AI resources to small businesses.
Give us money.
Overall: These seem like good ideas if implemented.
Respecting Intellectual Property Rights and Supporting Creators.
Allow courts to handle copyright.
I’m confused why you wouldn’t want to clarify this? Unless it’s that Congress would never agree to the clarifications they’d want.
Consider enabling licensing frameworks or collective rights systems to negotiate compensation, without incurring antitrust liability, but this should not address when or whether such licensing is required.
I like enabling bargaining, and would love to go further there, but I notice I am again confused by actively not wanting to clarify ambiguity in law.
My presumption is again that they don’t like what Congress would decide?
Consider protecting people from unauthorized distribution of their voice, likeness or other identifiable attributes, with the standard free speech exceptions and protections against stifling speech online.
Again with ‘consider.’
I do think people should be protected from this.
Congress should monitor development of copyright precedents to fill in potential gaps and provide protection for content creators.
I assume this is an attempt to not lose Senator Blackburn.
Overall: This seems like an attempt to head off something.
In particular, that something is Blackburn’s Trump America AI Act, or other similar bills that would empower copyright holders far more.
Preventing Censorship and Protecting Free Speech
“Congress should prevent the United States government from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas.”
The executive branch is currently doing exactly this.
Perhaps they should stop doing it without needing a law.
But also, yes, sure, pass a law as well.
“Congress should provide an effective means for Americans to seek redress from the Federal Government for agency efforts to censor expression on AI platforms or dictate the information provided by an AI platform.”
Agreed.
Overall: Good. Great, even. Let’s stand by these principles.
Enabling Innovation and Ensuring American AI Dominance
Establish regulatory sandboxes.
Provide resources to make federal datasets available.
“Congress should not create any new federal rulemaking body to regulate AI, and should instead support development and deployment of sector-specific AI applications through existing regulatory bodies with subject matter expertise and through industry-led standards.”
This is an unframework and essentially calls for not regulating AI, except incidentally through rules not intended for AI.
Sector-specific regulation cannot handle existential risk or superintelligence concerns, so this is saying do nothing about that.
Overall: First two are fine. Third is saying we will completely ignore the most important questions, and deal with others in haphazard fashion or through enabling industry regulatory capture.
Educating Americans and Developing an AI-Ready Workforce
‘Use non-regulatory methods’ to have education incorporate AI training.
How are you going to do that, exactly?
I presume it would happen on its own, though, so not that concerned.
Expand Federal efforts to study trends in workforce realignment.
“Congress should bolster capabilities at land-grant institutions to provide technical assistance, launch demonstration projects, and develop AI youth development programs.”
Overall: Okay, I guess. Mostly sounds useless. I smell pork.
We can pause here for a second. Those are what the White House calls its six objectives, as opposed to the actual main objective, which is the seventh: The Moratorium on state action.
The active parts of the supposed ‘Federal Framework’ are not doing much. A lot of it is an attempt to forestall other action, while doing essentially nothing, along with a few low-level giveaway programs I expect to accomplish little.
The main things we get that matter are welcome infrastructure help that doesn’t require a framework, and child protections, primarily age assurance.
The big red flag is that this framework actively rules out creating a method to deal with frontier AI risks, especially existential risks, while attempting in #7 to ban states from possibly doing so. The exception is that agencies should understand the ‘national security implications’ and attempt to mitigate that, in contrast to the current attempts to exacerbate those risks coming from the Department of War (DoW).
Certainly such domain skilling up is welcome, but I have zero expectation this will be used to seriously attempt to understand or deal with the important frontier risks. Notice for example that there is zero talk of any transparency requirement whatsoever.
And then we get to the reason this framework exists, which is to try and prevent states from doing anything at all related to AI for any reason, with notably rare and harmless exceptions.
Establishing a Federal Policy Framework, Preempting Cumbersome State AI Laws
“Congress should preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones.”
In practice what they mean is all state AI laws with notably rare exceptions detailed next, and ‘minimally burdensome’ means don’t interfere at all, unless it is the Federal Government doing the interfering with companies they do not like, a la what is happening with DoW.
This national standard should respect key principles of federalism and not preempt:
(1) Preserve: The traditional police powers retained by the states to enforce laws of general applicability against AI developers and users, including particular laws to protect children, prevent fraud, and protect consumers.
Yes, at least AI isn’t an automatic shield against existing laws.
Default outcome here is that this means the existing patchwork, not even designed for AI, is what gets applied without modification, at the whims of courts and agencies that don’t understand what any of it means. And on top of that, laws that apply everywhere else would also apply to AI, increasing overall regulatory burden.
(2) Preserve: State zoning laws, including state authorities, to determine the placement of AI infrastructure.
I presume not doing this would have been unworkable for many reasons, including a political conniption fit.
(3) Preserve: Requirements governing a state’s own use of AI, whether through procurement or services they provide like law enforcement and public education.
Good catch, you do pretty much have to do that.
So yeah, in terms of what AI actually does, it’s all verboten unless you write a general case law that happens to also cover AI.
“Preemption must ensure that State laws do not govern areas better suited to the Federal Government or act contrary to the United States’ national strategy to achieve global AI dominance.”
It sure seems like they already aren’t allowing this, but they’re going to super duper not allow this.
Prevent: “States should not be permitted to regulate AI development, because it is an inherently interstate phenomenon with key foreign policy and national security implications.”
This Federal Framework does not regulate AI development. At all.
So yes, their offer is nothing.
Prevent: “States should not unduly burden Americans’ use of AI for activity that would be lawful if performed without AI.”
I worry this leaves the door open for de facto occupational licensing, which is essentially the worst policy. Indeed, the entire framework seems to leave this door open (e.g. ‘to practice medicine in any form requires…’)
It would be really awesome if the language here was well-chosen and did not leave that door open. As in, make it so that any license to do something is a problem for the human, not the AI, and the act of using AI for this purpose is never barred by this, as broadly as possible.
At the same time, it ignores Levels of Friction concerns. There are plenty of things that are good and fine at insufficient scale or with marginal costs attached, which become not fine once AI removes those frictions.
Prevent: “States should not be permitted to penalize AI developers for a third party’s unlawful conduct involving their models.”
Nothing in this framework does that on a Federal level.
So yes, again, their offer is nothing.
Thus, what we are saying is that the developer bears no responsibility whatsoever for the illegal conduct of users, no matter what those users do and no matter how negligent the AI provider or developer was, period.
Overall conclusion: This is de facto still centrally a preemption framework. Their offer is not quite nothing, there are some good actions here, but in terms of what is being taken away the offer is essentially nothing. They won’t do anything meaningful for most purposes, and you can’t do anything either. That’s the plan.
Those who already supported a ‘moratorium’ banning all state laws are lining up behind the framework as one would expect.
Dean Ball is in support. Both of us are strongly heartened by the same section:
Dean W. Ball: I was especially heartened by this section and heartily concur with the White House that Congress should act to prevent government coercion over the free speech rights of AI developers and users alike.
I strongly agree with Samuel Hammond that if the competition is Blackburn’s monstrosity, this is a rather large improvement.
Neil Chilson calls it a ‘serious framework’ and praises all seven sections.
AI Czar David Sacks of course is excited.
Speaker Mike Johnson and various Congressional leaders are committing to act on the framework, although it’s hard to imagine a world where they didn’t say that.