Opinion

The AI Ad-Hoc Prior Restraint Era Begins

The White House has ordered Anthropic not to expand access to Mythos, and is at least seriously considering a complete about-face in American frontier AI policy, into a full prior restraint regime where anyone wishing to release a highly capable new model will have to ask for permission.
This would be the antithesis of all their previous rhetoric, and of all their actions, which systematically avoided laying a foundation for doing this in an orderly and informed fashion.
But now, with the existence of Mythos, and a potential coming hackastrophe where cyber attackers will by default have the edge and we desperately need defenders to have a head start, it is not clear they feel they have a choice.

If implemented well, this could be the right thing.
By default, it won’t be implemented well.

Project Glasswing Cannot Expand

The government is now deciding which models can and cannot be made available on particular terms to particular parties. This is already happening.
Anthropic wanted to expand the number of companies with access to Mythos as part of Project Glasswing. The White House said no.
It is not clear this is any of the White House’s damn business, legally speaking, but Anthropic honored their refusal. It is not clear what would have happened if they had done it anyway, but I strongly agree that it would have been unwise to find out.
Neil Chilson points out that while little harm is being done this time by denying Anthropic’s ability to widen the deployment of Mythos, the precedent of the White House vetoing Anthropic’s deployments of Mythos is concerning. As he says, arbitrary and informal government decision making can be even worse than formal regulatory regimes, favoring the connected and insiders. I’d add that it also destroys the ability to plan and enables massive corruption.
That lack of harm assumes the decision not to expand is wise. One dynamic here is that the European Union is pressing Anthropic to give its key firms access, Anthropic wants to say yes, and this is what the White House is refusing to allow. Is this security concerns, or is this the White House, pissed at (or looking to hack) the Europeans, punishing them by not letting them secure their systems? Of course, one could say this is just deserts for having pretended the American AI advantages were all fake instead of securing access.
And now, it looks like this is not going to be a one-off incident. Oh no.

The Ad-Hoc Prior Restraint Era Begins

White House Considers Vetting A.I. Models Before They Are Released.
How would this work? There would be an executive order creating an ‘AI working group’ of tech executives and government officials to examine potential procedures, up to and including a government review process.
A good implementation of a prior restraint regime for true frontier model releases, isolated to the biggest models of the leading labs and with formalized procedures that are difficult to abuse, would be a good thing, and eventually (perhaps soon, or even now) a necessary one.
I fear that is not what we are going to get. As Dean Ball and Neil Chilson point out, and Shakeem emphasizes, we are looking at a solution well outside the efficient frontier, full of ad-hockery. Because of course we are.
Guess what happens when you fail to prepare for or enact reasonable regulations? When the crisis takes you by surprise? You end up doing ad-hoc things in the heat of the moment instead, things that are worse on every level. A tale as old as time, many such cases, etc. We were assured this moment would never come, that anyone advocating for even the precursors of such rules would be a tyrant the likes of which the world has never seen. And then the moment came. And, well, here we are. Cue the music.
Tripp Mickle, Julian E. Barnes, Sheera Frenkel and Dustin Volz (NYT): The shift on A.I. has sowed confusion. As conversations between the White House and tech companies continue, some executives have argued that too much government oversight will slow down U.S. innovation against China, the people briefed on the discussions said. But the companies also do not agree on how the United States should move forward with potential regulation.
The New York Times writeup says this is partly the result of David Sacks leaving his duties, and being replaced by a combination of Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent.
The NYT write-up claims Britain has, and is further developing, such a review and prior restraint model. This is not the case. The UK AISI reviews models prior to release, but this is entirely voluntary. Labs cooperate because they find it useful.
Peter Wildeford: “The administration is discussing an AI working group that would bring together tech executives and government officials to examine potential oversight procedures”
Big deal! great to see the White House leading on this!
Jessica Tillipman: This is quite a regulatory pivot.
Taylor B (Abundance Institute; the part about Britain is false and reflects a mistake in the NYT article): If the New York Times reporting is true, a UK-style pre-approval process would be a giant step backwards for innovation and an undoing of President Trump’s excellent policy on AI so far. Such executive branch authority is ripe for abuse no matter the administration.
A pre-approval regime would slow deployment, raise barriers to entry, and concentrate power in the hands of regulators rather than innovators—undercutting the administration’s stated goal of removing “onerous regulation” to accelerate U.S. AI leadership. For all these reasons and more, Congress needs to clarify proper regulatory measures by passing a national AI framework.
@abundanceinst
Yo Shavit (OpenAI): President Trump, welcome to the SB1047 discourse
Dean W. Ball: Donald Trump’s Effort To Strangle AI
who’s gonna be on the jury instructions drafting committee of the board of frontier models?
Dean W. Ball: my nominees:
1. zvi (reliably does the reading)
2. teortaxes
3. that one guy with the Harry Potter pfp who yelled about 1047 a lot
4. llama-3-70b-instruct
5. Sèb Krier
6. Tszzl
7. Jimmy Apples
8. Associate Justice Neil Gorsuch, United States Supreme Court
Dean W. Ball: We need to write this in statute to be clear. I don’t want these people to be nominated. I want their names written in the law for all time. If one of them dies congress has to find the Correct successor using calipers
roon (OpenAI): claude opus 3 breaks all ties obviously
Danielle Fong: if you want woke ai this is how to get woke ai. this white house will arrogate the power, fumble the midterms, lame duck the next two years, and then upload bernie will regulate the ai forever after. or something isomorphic to this
Neil Chilson: If we’re discussing rumors, I think it’s more likely that Disney disowns the sequel trilogy than that this happens.
Of course, if it did happen it would be a bad idea (“Somehow, Palpatine returned.”) for reasons I mentioned in a post last week [that arbitrary and informal restrictions favor the well-connected and can be even worse than formal ones].
Perhaps Neil Chilson is right and all of this is vanishingly unlikely. I do not think so. I don’t think it is a done deal by any means, but things like this are inevitable once those involved understand the implications of frontier AI capabilities. Even if it does not happen this time around, it is mostly a matter of time.
It was always a matter of time and the talking price. We could have used that time to do a decent job of it. We still could, but time is short, the rhetorical well was poisoned by bad faith arguments, and it is now going to be a lot harder. Were previous proposed thresholds and timings premature? Yes, and it plausibly is still too early, but when you deal with an exponential your choices are too early or too late. No longer plausibly too early means definitely too late.
Whether or not all of this is necessary, the price we pay is steep. We cannot flinch from it. All the arguments that have been offered against such a regime, and all the negative consequences, still apply. If and when we do get such a system, as Gail Weiner points out, this slows diffusion and thus public benefit, elite capture accelerates as connected and approved insider corporations get early access and work the system, and there is more incentive to not depend on American AI models. That’s how it works, and the more ad-hoc the system is the more those things happen.
That, and similar related issues, are why the idea of asking for prior restraint was always so politically toxic, and only considered in extremis with a heavy heart. When bills were proposed involving such systems, the very organizations proposing such model bills were essentially run out of town on a rail for daring to suggest even what such a system might look like.
So now we may soon have such a system, only without a thoughtful design.

Implementation Through CAISI

If we’re going to do this, the obvious reasonable way is via CAISI. They have now added Google, Microsoft and xAI to the list of companies that have screening agreements with CAISI, along with Anthropic and OpenAI.
So far, these tests have not carried any consequences. They’re ‘for information purposes only.’ The government could still then use that information to decide to stop a release.
The leverage available can go well beyond exclusion from the federal marketplace.
The question is, will these tests turn into something with teeth? Will it be possible to ‘fail’ such a test (or is it pass?) and have the government tell you not to release? That would be the logical next step, along with gently informing everyone relevant they had better sign up.
If implemented well, that could be a good method. Even if we’re not going to do prior restraint, I will be happy to see CAISI testing all the important new releases, which has been shown not to appreciably slow releases down.
Then, if a true “holy ****” moment happens, we can deal with it. Not as good as a formalized full system, but better than pure ad-hockery.
Andrew Curran: To sum up: Anthropic, OpenAI, Google, Microsoft and xAI all have new pre-release screening agreements with CAISI. We don’t know the details of the new rules yet. I assume they will be announced with the AI executive order and the AI policy memo, both of which we may get today.
Jessica Tillipman: Piecing all the news together (last week’s Pentagon deals + CAISI pre-release screening agreements), these developments show how much leverage the federal government has over frontier AI companies.
The government may not need a freestanding statutory mandate to require model review across the private market. It can achieve much of the same practical result through the procurement relationship by making cooperation on testing, evaluation, cybersecurity reviews, lawful-use terms, etc., part of how frontier developers maintain federal market access (especially for classified defense work).
For companies that want to participate in the federal marketplace, this seems to be the new price of admission.
Samuel Roland: It feels like exclusion from participation in the federal marketplace is not all that effective a stick?
I mean, look what’s happened with Anthropic as an example. Not clear that the feds trying to gate marketplace admission will work if they overreach with their requests.
Jessica Tillipman: Yes, but the other companies agreed to the government’s terms. Anthropic is the outlier. The leverage is not unlimited, but it is clearly significant.
Nathan Calvin: Meta has a partnership with Scale, which itself works with CAISI. Where is @Meta’s agreement with CAISI? They are trying to be a real frontier AI developer and should act like it!

What Should We Do About AI?

Ben Buchanan and Dean Ball coauthor a NY Times editorial on cybersecurity policy, with the basic message being to wake up and actually do the minimum things like real and enforced chip export controls and guardrails on AI development, while cooperating with China on catastrophic risk management. You presumably know all this already, but hopefully this tells people who need to know and don’t know.
Dean Ball lays out his overall philosophy on politics and AI, that he is a classical liberal who opposes almost every regulatory action on AI and technology (and, mostly, on everything else) with one notable rare exception for management of AI catastrophic risks, here. The arguments would apply even more to any existential risks worth worrying about.
Dean Ball also has a companion piece on Hyperdimensional. As he says, the regime of ‘because the White House arbitrarily said so’ is one of the worst regimes for deciding whether new AI models can be released, but that’s the track we are on right now. Imagine what the government can and will do with that kind of leverage and power.
So yes, of course, it looks like we’re going to by default stumble into that fully arbitrary ad-hoc regime. That’s today’s main focus, although it ties into other choices.
Given that their other choices almost amount to deliberate misalignment, plus the usual worries about ad-hoc exercise of and concentration of power, we should worry.

The Chain of Command Nonsense Continues

One of the things in the new memo announcement is very much unlike the others, and represents a huge break and reversal in AI policy, as discussed above. This section is about the other parts of the statement, which are more of their demands for absolute obedience.
Andrew Curran: There is a new AI policy memo on the way from the White House, which does explain some things. According to the report there will shortly be new rules for model deployment under national security. Agencies will be urged to use multiple providers rather than one. It will also state that any labs under contract with the DoD must agree to not interfere with the military’s chain of command.
No one wants to, or has attempted to, ‘interfere with the military’s chain of command,’ any more than I have. One available interpretation is ‘attempt to actually challenge the chain of command and tell the military what to do,’ in which case it’s all good. The danger is that they instead interpret it as another version of ‘when Pete Hegseth says jump you ask how high and otherwise never ask any questions,’ in which case no, go home, sir, you’re… overstepping your authority.
I am hopeful, because blackballing Anthropic is no good for anyone. Well, not good for any American without competing commercial or other private interests.
Maggie Eastland, Mackenzie Hawkins, and Hadriana Lowenkron (Bloomberg): Axios first reported that the White House was working on guidance that would allow government agencies to “get around” the Pentagon’s designation of Anthropic as a supply chain risk.
… It also affirms that AI companies must strictly adhere to the chain of command — but stops short of requiring that companies agree to “all lawful use” of their products, which is the specific language the Pentagon has demanded in military agreements.
You know what is not helping? Pete Hegseth continuing to call Dario Amodei an ‘ideological lunatic.’ Which, like other comments before it, is way worse than anything that was in the internal memo whose leaking caused a full cutoff in negotiations.

The Government Should Maintain Multiple AI Providers

There is one clearly good part of the above memo. The new principle of ‘have multiple AI providers available to agencies at all times’ is the right call. You want resilient backups for everything. The model providers physically can’t withdraw what they have already deployed, but there’s no reason to risk getting backed into a corner, and you never know which tool will be right for which job.
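To make the resilience point concrete, here is a minimal sketch, in Python, of what ‘multiple providers at all times’ can look like on the client side: an ordered fallback across independently configured providers. The provider names, endpoints, and the call_model stub are hypothetical placeholders for illustration, not anything from the memo or any vendor’s actual API.

from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    endpoint: str  # hypothetical endpoint, for illustration only

def call_model(provider: Provider, prompt: str) -> str:
    # Placeholder for a real vendor SDK call; simulated here so the
    # sketch runs standalone. vendor-a pretends to be unavailable.
    if provider.name == "vendor-a":
        raise ConnectionError("simulated outage")
    return f"[{provider.name}] response to: {prompt}"

def complete_with_fallback(providers: list[Provider], prompt: str) -> str:
    # Try each configured provider in order; return the first success.
    errors = []
    for provider in providers:
        try:
            return call_model(provider, prompt)
        except Exception as exc:  # real code would catch narrower errors
            errors.append((provider.name, repr(exc)))
    raise RuntimeError(f"all providers failed: {errors}")

providers = [
    Provider("vendor-a", "https://api.vendor-a.example/v1"),
    Provider("vendor-b", "https://api.vendor-b.example/v1"),
]
print(complete_with_fallback(providers, "summarize this memo"))

The design point is simply that no single provider is a hard dependency: if one is cut off, or cuts you off, the request path survives.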

How’s It Going To End?

As Dean Ball puts it, part of the government has now realized some of the security implications of frontier AI systems, and right on schedule it is freaking the hell out, and looking to take control and use this for its own advantage.
Even if that starts out coming from a good place, by default controlled access and prior restraint will turn into a weapon of insiders against outsiders, a tool of leverage and corruption, and ultimately an attempt to control just about everything.
You can minimize this by doing it systematically and with clear rules, rather than going ad hoc or asking the companies themselves. That does not seem to be the plan.
The alternative plan, of insisting that AI companies should release all their frontier models (or even their weights) indefinitely without checking in first, and let the internet sort them out, was only ever going to work out if capabilities hit a plateau. Thus, a lot of arguments that a plateau had been reached or was arriving Real Soon Now, when those paying attention knew that was not the case.
A dedicated campaign of rhetoric made it impossible to point out the coming problem without getting absolutely buried in bile. That did not stop reality. Now here we are.
The best thing we can do now is figure out how to do this wisely, and convince those in charge to do it wisely, before it is instead done unwisely, to minimize the potential for abuse and for damage done, and to do our best to limit its scope to where it is actually necessary.