A visualization of changing AGI timelines, 2023–2026
AI 2027 came out a year ago, and in reviewing it now, I saw that AI Futures researchers Daniel Kokotajlo, Eli Lifland, and Nikola Jurkovic had updated their AGI timelines to be later over the course of 2025. Then, in 2026, Daniel and Eli updated in the other direction to expect AGI to come sooner.

I noticed others with great track records had also made multiple AGI forecasts. A change in a single person's forecast is meaningful in a way that a change in an aggregate forecast may hide: an aggregate can shift entirely because of a change in who is forecasting, not in what those people individually believe.

So I decided to visualize the net direction of updates over the last few years. I find this provides a complementary view of AI timelines to those from Metaculus, Epoch, AI Digest, and others.

So here is a visualization of AGI forecasts. Criteria for inclusion were: the person has made at least two forecasts, they gave specific dates, they gave a sense of their confidence interval or uncertainty, and their definitions of AGI are similar.

Some major caveats. Everyone has different definitions of AGI. (That is a big advantage of everyone forecasting the same question on Metaculus, or in the 2025 or 2026 AI forecast survey run by AI Digest.) Often individual people even use different definitions of AGI at different times for their own forecasts. I included data points above if I judged that their definition was substantially similar to:

AGI: Most purely cognitive labor is automatable at better quality, speed, and cost than humans.

I was pretty generous with this, and it's very debatable whether e.g. a "superhuman coder" from AI 2027 is AGI in the same way that "99% of remote work can be automated" is AGI.
Apologies to those in the visualization who would disagree that the definition they used is similar enough to this and don't feel this chart captures their views faithfully.

Second caveat: I rounded the dates on which forecasts were made to one of four dates: <= 2023, early 2025, late 2025, and April 2026. This made the visualization much easier to read. So a further apology if you made a prediction in, say, August 2025 but I marked it as "late 2025".

Third caveat: the types of confidence intervals various researchers used also varied substantially. I had to really guess or extrapolate to approximate these as 80% confidence intervals, so a final apology if you don't think the range you gave is fairly characterized as an 80% CI.

All caveats aside, what impression does this visualization give? Are reputable AI experts who have made multiple predictions updating the same way Daniel Kokotajlo and Eli Lifland did, pushing out their timelines in 2025 and pulling them in during 2026?

From the visualization, it looks to me that in 2023 and 2024 most people brought their AGI timelines in to be sooner, though with some exceptions like Tamay Besiroglu. From 2025 to 2026, joining Daniel and Eli in pushing their timelines out are the Metaculus community, Dario Amodei, and elite forecaster Peter Wildeford. In fact, across 2025, only Benjamin Todd brought in his timelines to say AGI would happen sooner.

Most notably, though, every single person who updated their timelines between January 2026 and April 2026 moved their timeline to say AGI is coming sooner, myself included.

So I think the data supports the impression I got from the AI 2027 authors. One way I could characterize it is:

In the OpenAI/ChatGPT era of 2023-2024, people updated towards AGI coming sooner.

In the xAI, Meta, and Gemini era of 2025, people updated towards AGI coming later.
In the Anthropic era of 2026, people updated back towards AGI coming sooner.

Take from that what you will.

Bayesians shouldn't be able to predict which direction they will update. But seeing the history of other people's updates is useful information. It gives me intuitions about how I or others may update soon, so I take that as evidence that I should update now.

(A similar post is also on the FutureSearch blog, where I plan to update the visualization as more predictions are made; this one here on LW will stay static.)