Forecasts
This page shows our timelines and takeoff forecasts. We are highly uncertain about these forecasts, and have expressed our uncertainty as probability distributions over the times at which each milestone might be reached. We show the raw results of a Monte Carlo simulation of our model, as well as our subjective all-things-considered probability distributions. We plan to keep this page up to date as our predictions change.
What do we mean by "all-things-considered"?
Though we view the model's outputs as an important source of evidence about what future AI progress might look like, we don't trust them blindly. Our all-things-considered views start from the model's results, then adjust based on intuition and on factors the model doesn't include.
How have our forecasts changed since publishing the AI Futures Model?
The forecast dropdown below shows the history of how our views have changed since publication. Here is a summary of the changes:
- 2025 Dec 31: Fixed an infrequent bug in determining when TED-AI and ASI are achieved. The thresholds for TED-AI and ASI were wrong in the case where research taste at AC is better than the best human's, i.e. >3.09 SDs. This affects a small fraction of our Monte Carlo simulations: 13% of Eli's and 9% of Daniel's. The fix very slightly speeds up takeoff: for example, it increases P(AC->ASI < 1 year) from 26.3% to 27.1% for Eli and from 36.7% to 37.1% for Daniel, and increases P(AC->ASI < 10 years) from 58.3% to 59.3% for Eli and from 71.5% to 71.8% for Daniel.
- 2026 Jan 26: Fixed a minor bug in the model code, added all-things-considered forecasts for more quantities, and made minor updates to Eli's AC timelines and AC-to-ASI takeoff speeds. The bug was a numerical underflow causing incorrect coding labor calculations for certain values of the “Coding Labor Parallelization Penalty (λ)” and “Coding Automation Efficiency Improvement Factor (η)” parameters. It affected approximately 3% of our Monte Carlo rollouts, and fixing it had a negligible effect on the outcome distributions. Daniel and Eli both added all-things-considered forecasts for SC, TED-AI, and ASI; Daniel additionally added a forecast for SAR, and Eli added one for the time from AC to TED-AI. Eli also updated his all-things-considered forecasts for AC timelines and AC-to-ASI takeoff: he moved his AC arrival median from Jul 2032 to Mar 2032 and increased the probability of AC arrival in 2026 from 4% to 6%, reflecting a bit more weight on an Anthropic-style worldview in which we're close to AC, partly informed by Claude Code's impressiveness. He also increased the chance of fast takeoffs from AC to ASI, giving 25% to takeoff in <0.5 years as opposed to 18% before, while increasing uncertainty at the upper end, decreasing the chance of takeoff in <10 years from 85% to 83%. These changes reflect his view that the distribution should perhaps be more spread out than the model's due to outside-of-model factors.
Milestone arrival dates
The chart below shows how long we project it will take to achieve various AI milestones (toggle them on in the sidebar). The x-axis is the year a milestone is achieved, and the y-axis is the probability density at that point in time, expressed as the % chance that the milestone would be achieved within a year at that level of density.
All-things-considered (ATC) forecasts are shown as dashed lines.
Probability densities are estimated based on 10,000 simulated trajectories.
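To illustrate how a density curve like the one charted here can be read off Monte Carlo output, here is a minimal sketch using a hand-rolled Gaussian kernel density estimate over simulated milestone arrival years. The lognormal arrival times below are a hypothetical placeholder, not the AI Futures Model's actual trajectories.

```python
import math
import random

def gaussian_kde(samples, bandwidth):
    """Return a function estimating probability density (per year)
    from a list of simulated milestone arrival years."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def density(t):
        return norm * sum(
            math.exp(-0.5 * ((t - s) / bandwidth) ** 2) for s in samples
        )
    return density

# Hypothetical stand-in for the model's Monte Carlo output:
# 10,000 simulated AC arrival years drawn from a lognormal.
random.seed(0)
arrival_years = [2026 + random.lognormvariate(1.5, 0.8) for _ in range(10_000)]

density = gaussian_kde(arrival_years, bandwidth=1.0)
# Density at a point in time, read as "% chance the milestone
# is achieved within a year at this density level":
print(f"{100 * density(2032):.1f}% per year around 2032")
```

Because the estimate is a density per year, multiplying by 100 gives the approximate percent chance of the milestone arriving within a year around that date, matching how the chart's y-axis is described above.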
Eli's notes on his all-things-considered forecast
To adjust for factors outside of the model, I lengthen my AC timelines median from late 2030 to early 2032, driven primarily by unknown model limitations and mistakes and the potential for data bottlenecks that we aren't modeling. In summary:
- Unknown model limitations and mistakes. With our previous (AI 2027) timelines model, my instinct was to push my overall forecasts longer due to unknown unknowns, and I'm glad I did: my median for SC was 2030 as opposed to the model's output of Dec 2028, and the former now looks more right. I again want to lengthen my overall forecasts for this reason, but less so, because our new model is much better tested and more carefully considered than our previous one, and is thus less likely to have simple bugs or unrecognized conceptual issues.
- Data bottlenecks. Our model implicitly assumes that data progress is proportional to algorithmic progress, but in practice data could be either more or less of a bottleneck. My guess is that modeling data would lengthen timelines a bit, at least in cases where synthetic data is hard to fully rely upon.
I also increase the 90th percentile from 2062 to 2125. But I actually give a higher probability than the model to getting AC in 2026 (6% instead of 4%), driven by heuristics about outside-of-model factors adding uncertainty, and by putting a little weight on an Anthropic-style worldview in which we are quite close to AC. You can see all of the adjustments that I considered in this supplement.
Time from coding automation to future milestones
The chart below shows how long we project it will take to reach various milestones after achieving AC (Automated Coder). The x-axis represents years after AC achievement, and the curves show the cumulative probability that each subsequent milestone has been reached by that point.
All-things-considered (ATC) forecasts are shown as dashed lines.
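A cumulative curve like this is just the empirical CDF of the simulated durations: its value at x years is the fraction of Monte Carlo trajectories in which the milestone arrives within x years of AC. A minimal sketch, using hypothetical lognormal AC-to-ASI times rather than the model's real output:

```python
import bisect
import random

def ecdf(durations):
    """Empirical CDF: fraction of simulated trajectories in which the
    milestone is reached within x years of AC."""
    sorted_d = sorted(durations)
    n = len(sorted_d)
    def cdf(x):
        return bisect.bisect_right(sorted_d, x) / n
    return cdf

# Hypothetical placeholder for simulated AC->ASI durations in years;
# the real values would come from the model's Monte Carlo rollouts.
random.seed(0)
ac_to_asi = [random.lognormvariate(0.5, 1.2) for _ in range(10_000)]

p = ecdf(ac_to_asi)
print(f"P(AC->ASI < 1 year) is approximately {p(1.0):.0%}")
print(f"P(AC->ASI < 10 years) is approximately {p(10.0):.0%}")
```

Quantities like P(AC->ASI < 1 year) quoted in the changelog above are read directly off such a curve.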
Eli's notes on his all-things-considered forecast
To get my all-things-considered views, I increase the chance of fast takeoff a little (changing AC to ASI in <0.5 years from 19% to 25%) and decrease the chance of long takeoffs (e.g., changing AC to ASI in <10 years from 59% to 83%).
The biggest reasons I make takeoff a bit faster are:
- Automation of hardware R&D, hardware production, and general economic automation. We aren't modeling these, and while they have longer lead times than software R&D, they could make a substantial difference, especially in multi-year takeoffs.
- Shifting to research directions which are less compute bottlenecked might speed up takeoff, and isn't modeled. Once AI projects have vast amounts of labor, they can focus on research which loads more heavily on labor relative to experiment compute than current research.
The former issue leads me to make a sizable adjustment to the tail of my distribution: I think modeling hardware and economic automation would make it more likely that, even if there isn't a taste-only singularity, we still get to ASI within 2-10 years.
I think that, as with timelines, unknown limitations and mistakes in expectation point towards takeoff going slower. But unlike with timelines, there are counter-considerations that I think are stronger. You can see all of the adjustments that I considered in this supplement.
In our results analysis, we analyze which parameters are most important for the above forecasts. We also examine the correlation in our model between short timelines and fast takeoffs.