
The AI industry is where banking was in 2006. (We’re hiring)

TL;DR: CeSIA, the French Center for AI Safety, is recruiting. French is not necessary. Apply by 22 May 2026; Paris or remote in Europe/UK.

On August 27, 2005, at an annual symposium in Jackson Hole, Raghuram Rajan, then chief economist of the International Monetary Fund, argued in front of central bank governors and top officials that the banking innovations of the previous decade had not made the world safer. The financial instruments built over that decade, he argued, had become so intricate that even their creators no longer fully understood the risks they carried. Risk had migrated to institutions the supervisory system was not designed to watch. And the people running those institutions were compensated in ways that rewarded short-term performance over long-term stability.

The reception was hostile. Lawrence Summers, a former U.S. Treasury Secretary, rose from the audience to attack the paper, calling its premise “slightly Luddite” and “largely misguided,” and warning that the kind of changes Rajan argued for would only reduce the productivity of the financial sector.

Three years after Jackson Hole, major financial institutions failed: first Bear Stearns, then Lehman Brothers, then Merrill Lynch, then AIG. The post-mortem concluded that the failures had been structural rather than personal. Capital requirements, the share of a bank’s own money it must keep on hand to absorb losses, were calibrated by banks against statistical models of their own design. The banks were under a voluntary supervision program in which supervisors could examine the books but had no power to compel changes. The credit rating agencies, whose grades decided which investors could buy a given financial product and at what price, were paid by the very firms whose products they were grading. And the regulatory perimeter was drawn by lobbying.

Each of these problems had been visible, and named, before the crisis. None had been corrected. Twenty years later, the same is happening with AI.

At CeSIA, our goal over the next 18 months is to build the institutional capacity to replace voluntary commitments with binding ones: technical operationalisation of risk thresholds, evaluation that does not depend on AI companies’ goodwill, statutory disclosure, and international coordination so that no individual company has to choose between caution and survival.

We initiated and coordinated the Global Call for AI Red Lines, launched at the UN General Assembly by Maria Ressa and signed by 12 Nobel laureates and 11 former heads of state. Our contributions have been incorporated into major documents, for example the EU’s Code of Practice. We advise decision-makers at the highest levels in France, the EU, the UN, the OECD…

The first detailed opinion poll we commissioned with OpinionWay on how French citizens see AI risk found that 8% of French people want to accelerate AI development, while 42% want to pause or significantly slow it. The gap between what citizens want and what their institutions are doing is large, and underexploited.

We’re hiring three people to help close this gap. Apply by 22 May 2026. You can work from our Paris office or remotely within the EU or the UK.

Head of Policy Analysis. French not needed. Lead what CeSIA writes and publishes on AI policy: set the analytical agenda, write public analysis and private briefings for decision-makers, and manage a small team of analysts and fellows. We want someone with management experience, and ideally a track record of solid analysis on AI policy or safety.

Head of Communications / Communications Lead. Run the communications function: strategy, messaging infrastructure, press relations, crisis comms. Hiring at either Head or Lead level. Fluent in French and English.

Operations & Executive Associate. Right hand to the leadership team. Fluent in French and English.

If you know someone who’d be a fit, we’d be especially grateful for the introduction. Please email me at felix@cesia.org if you have any questions. I’ll be in the comments too.
