AI Predictions

Metaculus

Sharp Claims

    • Options: Futurama, AI-Fizzle, AI-Dystopia, Singularia, Paperclipalypse
    • This is the best all-around AI prediction question, despite possible subjectivity among the 3 non-ASI options.
    • The 2 ASI futures currently total 21%, but I say 0.3%.
    • I say: Futurama 51%, AI-Fizzle 41%, AI-Dystopia 7%
    • Community prediction of 2041 is way too early, I say 50% by 2100.
    • e.g. 19% annual growth for 4 straight years (1.19^4 ≈ 2.005, i.e. a doubling; see the sketch after this list)
    • Community says 34%, presumably based on expected AI-driven growth.
    • I say 5%, based mostly on possible catch-up growth from global free-market reforms.
    • Criterion: “can perform any task humans can perform in 2021, as well or superior to the best humans in e.g. sports, preparing and serving food, psychotherapy, discovering scientific insights which could win 2021 Nobel prizes, creating original art and entertainment, and having professional-level software design and AI design capabilities”
    • Community prediction of 44 months after “weak AGI” (see below) is laughably short
    • I say 50% chance it needs >50yr after “weak AGI”.
    • A major drop in labor force participation (LFP) might be a good leading indicator of AGI, but 10% is too low.
    • Higher thresholds might be better indicators, but could be confounded by leisure/labor tradeoffs.
    • Community forecast of 2123 is inconsistent with ASI and GWP predictions above.
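
A quick check of the growth arithmetic above (a minimal sketch in Python; only the 19%-for-4-years figure comes from the bullets, the comparison rates are illustrative):

  import math

  # Does 19%/yr sustained for 4 straight years double GWP?
  rate, years = 0.19, 4
  print((1 + rate) ** years)   # 2.005 -> yes, almost exactly a doubling

  # Years needed to double at other illustrative growth rates
  for r in (0.03, 0.05, 0.19):
      print(f"{r:.0%}/yr doubles GWP in {math.log(2, 1 + r):.1f} years")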

Dull Claims

    • This is really just a sociological question of when the Nobel committee will decide to signal its AI-friendliness by awarding the Literature prize to some AI-assisted literature project.
    • Thus community prediction of 2059 is way too late. I guess 2035.
    • Question should be: when will one of the three Nobel Prizes in natural sciences, or a Fields Medal, be awarded to an AI system that receives no more human supervision than human winners typically get? I say 2050.
    • Criteria: adversarial Turing test, assemble a toy car model, and get 90% on subject-matter and programming evals.
    • The robotics eval will gate this. It will probably be demonstrated in time for the community prediction of 2032. But a better robotics eval would be: able to cost-effectively replace a human maid. I predict 2035 for that.
    • Even with the robotics eval, this challenge does not come remotely close to true AGI, best defined as: autonomously perform most economically-valuable c.2020 human cognitive work as cost-effectively as humans do.
    • The underlying When-AGI question (above) is dull, so this question is just about odds of 95% depopulation in 1st 25yr after imminent weak (quizbot + robo-hands) AGI.
    • 95% depopulation is a dull question compared to extinction.
    • Community predicts >30% chance of 95% depopulation by 2065.
    • I say 1%, but because of AI enabling nukes and pandemic, not ASI.
    • Criterion is hopelessly vague: “reliably superhuman performance across virtually all questions of interest”.
    • Criteria
      • A text-only Turing Test, using judges and confederates of unspecified competence.
      • Human level on Winograd schemas, i.e. pronoun-disambiguation puzzles such as “The trophy doesn't fit in the suitcase because it is too big” vs. “…too small” (already achieved c.2019)
      • 75th percentile on math SAT
      • Explore all 24 rooms of the Atari game “Montezuma's Revenge” in <100 hours of game play.
    • The Turing test is under-specified and the gaming criterion is non-general.

Yann LeCun

  • 2024-12
    • “To have possibly a system that at least to most people feels like it has similar[sic?] intelligence as humans [..] I don't see this happening in less than 5 or 6 years.”
  • 2024-10
    • “If this project is crowned with success, we will perhaps have architectures in 7.5 years that can perhaps reach the level of human intelligence. Mark Zuckerberg likes to hear me say that but I can't promise anything.”
    • “There’s a whole lot of problems that will absolutely pop up, so AGI might take 50 years, it might take 100 years, I’m not too sure.”

Andrej Karpathy

  • 2025-01 “2025-2035 is the decade of agents. [..] Tomorrow, you’ll spin up organizations of Operators for long-running tasks of your choice (eg running a whole company). You could be a kind of CEO monitoring 10 of them at once, maybe dropping in to the trenches sometimes to unblock something”

Shane Legg

  • “By what year would you assign a 10%/50%/90% chance of human-level machine intelligence? 2018, 2028, 2050”
  • 2025-01 “this now means a 50% chance of AGI in the next 3 years!”

Dario Amodei

  • 2024-10 “Powerful AI could come as early as 2026, though there are also ways it could take much longer.”
    • “is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc”
    • “skill exceeding that of the most capable humans in the world”
    • “can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary”
    • “The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. We could summarize this as a 'country of geniuses in a datacenter'.” (see the arithmetic sketch after this list)
    • No singularity: “there are real physical and practical limits, for example around building hardware or conducting biological experiments. Even a new country of geniuses would hit up against these limits.”
    • “AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years.”
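
A back-of-envelope reading of the “millions of instances” claim above, as a minimal sketch; every number is an illustrative assumption, not Amodei's:

  # All quantities are hypothetical, chosen only to show the shape of the arithmetic.
  train_flops = 1e26                # assumed total training compute of a frontier model
  train_seconds = 100 * 86400       # assumed ~100-day training run
  cluster_flops_per_s = train_flops / train_seconds   # sustained cluster throughput

  tokens_per_s = 100                # assumed instance speed, roughly 10x-100x human pace
  flops_per_token = 1e11            # assumed inference cost per token (~2x params for a ~50B-param model)

  instances = cluster_flops_per_s / (tokens_per_s * flops_per_token)
  print(f"~{instances:,.0f} parallel instances")   # ~1.2 million with these assumptions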

Gary Marcus

  • 2025-01 25 predictions on where AI will be at the end of 2025. Mostly short-term or tactical.
    • “Less than 10% of the work force will be replaced by AI. Probably less than 5%.” Definitely less than 1%.
  • 2024-04 9 Things AI Won't Do In 2025. All safe bets for 2025, but these goals will mostly be met over the next 15 years.
    • Understand a movie: will be able to extract the script and understand it, but not full watching until 2027
    • Write biographies/briefs without hallucinations: will get inexorably better over the next few years
    • Pulitzer/Oscar-caliber writing: by 2030 the gating factor will be judges' acceptance
    • Drive a car or bike off-road: will get inexorably better over the next few years
    • Cook and clean: robot cooks/maids will come 2030-2035
    • Care-giving: robots will be increasingly trusted for this 2035-2040
    • Nobel-level discoveries: AI collaboration will expand, but no fully autonomous Nobel science until 2040-2060
  • 2022-05 5 Things AI Won't Do In 2029. Marcus could quibble, but I think he will be arguably wrong on essentially all 5:
    • Understand a movie or novel: won't be a problem
    • Work as a kitchen cook: won't quite yet be practical, but the demos will be impressive and the goal in sight
    • Reliably write 10K lines of bug-free code from an English spec or via non-expert guidance: yes, but “bug-free” requires the spec/guidance to be so detailed as to be expert-level
    • Ingest arbitrary natural-language math proofs for symbolic verification: yes, unless “arbitrary” is read as “any and every”

Eliezer Yudkowsky

Sharp Claims

  • 2024-02 “[O]ur current remaining timeline looks more like five years than 50 years. Could be two years, could be 10.”
  • 2023-12 “Default timeline to death from ASI: Gosh idk could be 20 months could be 15 years, depends on what hits a wall and on unexpected breakthroughs and shortcuts. People don't realize how absurdly hard it is to set bounds on this, especially on the near side. I don't think we have a right to claim incredible surprise if we die in January.”
  • 2023-10 “Who can possibly still imagine a world where a child born today goes to college 17 years later?”

Dull Claims

  • 2023-03 No falsifiable predictions. “Be willing to destroy a rogue datacenter by airstrike. [..] Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange.”
  • 2022 Critique of Yudkowsky's betting record
  • 2022 bet with Paul Christiano about whether AI will be able to solve “the hardest problem” on the International Math Olympiad by 2025.
  • 2017 Yudkowsky bet Bryan Caplan $200 to Caplan's $100 that AI will have wiped out humanity by 2030.

Ajeya Cotra

  • 2020 Draft Report on Biological Anchors

Robin Hanson

  • 2008 “AIs that can parse and use CYC should be feasible well before AIs that can parse and use random human writings.”
  • 2008 “The idea that you could create human level intelligence by just feeding raw data into the right math-inspired architecture is pure fantasy. You couldn’t build an effective cell or ecosystem or developed economy or most any complex system that way either – such things require not just good structure but also lots of good content.”

Paul Christiano

Connor Leahy

  • 2024-11 “ControlAI's Narrow Path are the types of policies that would need to be implemented to actually, you know, not die from AGI in the next couple of years. [..] But I don't expect we'll be able to solve all these problems before deadline, since deadline currently is in the next couple of years.”

Bryan Caplan

    • Hanson: “what's the special things that you think that that will forever remain outside their abilities?”
    • Caplan: “robots will not be good at coming up with an intellectually original argument, or original music”