    * I say 1%, but because of AI enabling nukes and pandemic, not ASI.
  * [[https://www.metaculus.com/questions/4123/time-between-weak-agi-and-oracle-asi/|When a superhuman AI oracle?]]
    * Criterion: "reliably superhuman performance across virtually all questions of interest"
    * Date depends on dull "When weak AGI?" question below.
  * [[https://www.metaculus.com/questions/3479/when-will-the-first-artificial-general-intelligence-system-be-devised-tested-and-publicly-known-of/|When weak AGI?]]
    * Criteria
  
===== Eliezer Yudkowsky =====
  
  * [[https://www.theguardian.com/technology/2024/feb/17/humanitys-remaining-timeline-it-looks-more-like-five-years-than-50-meet-the-neo-luddites-warning-of-an-ai-apocalypse|2024-02]] "[O]ur current remaining timeline looks more like five years than 50 years. Could be two years, could be 10."
  * [[https://x.com/ESYudkowsky/status/1739705063768232070|2023-12]] "Default timeline to death from ASI:  Gosh idk could be 20 months could be 15 years, depends on what hits a wall and on unexpected breakthroughs and shortcuts.  People don't realize how absurdly hard it is to set bounds on this, especially on the near side.  I don't think we have a right to claim incredible surprise if we die in January."
  * [[https://x.com/ESYudkowsky/status/1718058899537018989|2023-10]] "Who can possibly still imagine a world where a child born today goes to college 17 years later?"
  * [[https://www.lesswrong.com/posts/ZEgQGAjQm5rTAnGuM/beware-boasting-about-non-existent-forecasting-track-records|2022]] Critique of Yudkowsky's betting record
  * [[https://www.econlib.org/archives/2017/01/my_end-of-the-w.html|2017]] Yudkowsky bet Bryan Caplan $200 to Caplan's $100 that AI drives humanity extinct by 2030 (implied odds sketched below).
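
A minimal sketch (mine, not from the linked post) of what those stakes imply when read as straight odds. End-of-the-world bets are usually pre-paid because the extinction side could never collect, so the stake ratio is not a clean read of either party's credence; see the link for the actual terms. The helper name below is made up for illustration.

<code python>
# Naive break-even credences implied by betting $200 against $100 on
# "AI drives humanity extinct by 2030", read as straight odds.
# Illustrative only: the real 2017 bet is structured differently
# (pre-payment, interest), so this overstates what the stakes reveal.

def break_even_probability(stake: float, opposing_stake: float) -> float:
    """P of the bet-on event at which risking `stake` to win `opposing_stake` has zero EV."""
    return stake / (stake + opposing_stake)

# Yudkowsky's side: risk $200 to win $100 if extinction occurs.
print(break_even_probability(200, 100))      # 0.667 -> +EV only if P(doom) >= 2/3
# Caplan's side: risk $100 to win $200 if extinction does not occur.
print(1 - break_even_probability(100, 200))  # 0.667 -> +EV only if P(doom) <= 2/3
</code>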
  
===== Ajeya Cotra =====