====== ai-predictions ======

Differences between revision 2025/01/26 15:56 by brian and the current revision 2025/01/26 20:24 by brian (edit summary: [Dull Claims]). Removed lines are prefixed with "-", added lines with "+".
Line 42:
    * I say 1%, but because of AI enabling nukes and pandemic, not ASI.
  * [[https://www.metaculus.com/questions/4123/time-between-weak-agi-and-oracle-asi/|When a superhuman AI oracle?]]
-    * Criterion is hopelessly vague: "reliably superhuman performance across virtually all questions of interest".
+    * Criterion: "reliably superhuman performance across virtually all questions of interest"
+    * Date depends on dull "When weak AGI?" question below.
  * [[https://www.metaculus.com/questions/3479/when-will-the-first-artificial-general-intelligence-system-be-devised-tested-and-publicly-known-of/|When weak AGI?]]
    * Criteria
Line 54 → Line 55:
===== Yann LeCun =====
  
  * [[https://www.youtube.com/watch?v=u7e0YUcZYbE|2024-12]]
   * "To have possibly a system that at least to most people feels like it has similar[sic?] intelligence as humans [..] I don't see this happening in less than 5 or 6 years."   * "To have possibly a system that at least to most people feels like it has similar[sic?] intelligence as humans [..] I don't see this happening in less than 5 or 6 years."
-  * [[https://www.youtube.com/watch?v=xL6Y0dpXEwc|2024-10]]
-    * "[intelligent] assistants are coming [..] there is a future, maybe 10, 20 years from now, they will be really smart. We need those systems to have human-level intelligence..."
+  * [[https://x.com/slow_developer/status/1871027957411782827|2024-12]]
+    * "There is no question that at some point in the future AI systems will match and surpass human intellectual capabilities. They will be very different from current AI systems. [..] Probably over the next decade or two. Those super-intelligent systems will do our bidding and remain under our control."
+  * [[https://www.youtube.com/watch?v=eDY9FUT5ces|2024-10]]
+    * "If this project is crowned with success, we will perhaps have architectures in 7.5 years that can perhaps reach the level of human intelligence in 7.5 years. Mark Zuckerberg likes to hear me say that but I can't promise anything."
+  * [[https://www.youtube.com/watch?v=ketW8xsL-ig|2024-03]]
+    * "Before we can get the to the scale and performance that we observe in humans it's going to take quite a while. [..] All of this [associative memory, reasoning, hierarchical planning] is going to take at least a decade and probably much more."
  * [[https://www.amazon.com/Architects-Intelligence-truth-people-building-ebook/dp/B07H8L8T2J|2018]]
   * "There’s a whole lot of problems that will absolutely pop up, so AGI might take 50 years, it might take 100 years, I’m not too sure."   * "There’s a whole lot of problems that will absolutely pop up, so AGI might take 50 years, it might take 100 years, I’m not too sure."
Line 101 → Line 105:
  
===== Eliezer Yudkowsky =====
-==== Sharp Claims ====
  
  * [[https://www.theguardian.com/technology/2024/feb/17/humanitys-remaining-timeline-it-looks-more-like-five-years-than-50-meet-the-neo-luddites-warning-of-an-ai-apocalypse|2024-02]] "[O]ur current remaining timeline looks more like five years than 50 years. Could be two years, could be 10."
  * [[https://x.com/ESYudkowsky/status/1739705063768232070|2023-12]] "Default timeline to death from ASI:  Gosh idk could be 20 months could be 15 years, depends on what hits a wall and on unexpected breakthroughs and shortcuts.  People don't realize how absurdly hard it is to set bounds on this, especially on the near side.  I don't think we have a right to claim incredible surprise if we die in January."
  * [[https://x.com/ESYudkowsky/status/1718058899537018989|2023-10]] "Who can possibly still imagine a world where a child born today goes to college 17 years later?"
-==== Dull Claims ====
-  * [[https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/|2023-03]] No falsifiable predictions. "Be willing to destroy a rogue datacenter by airstrike. [..] Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange."
  * [[https://www.lesswrong.com/posts/ZEgQGAjQm5rTAnGuM/beware-boasting-about-non-existent-forecasting-track-records|2022]] Critique of Yudkowsky's betting record.
-  * [[https://www.lesswrong.com/posts/sWLLdG6DWJEy3CH7n/imo-challenge-bet-with-eliezer|2022]] Bet with Paul Christiano about whether AI will be able to solve "the hardest problem" on the International Math Olympiad by 2025.
  * [[https://www.econlib.org/archives/2017/01/my_end-of-the-w.html|2017]] Yudkowsky bet $200 against Bryan Caplan's $100 that AI causes human extinction by 2030 (see the rough implied-odds arithmetic below).
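    * Rough implied-odds arithmetic for that bet (my own back-of-the-envelope, not from the linked post, and ignoring that the "extinction" side of such a bet can never actually collect): staking $200 against $100 is fair exactly at the break-even probability computed below.
<code latex>
% Break-even probability p of AI-caused extinction implied by the $200 : $100 stakes.
% Yudkowsky's side: win $100 with probability p, lose $200 with probability (1 - p).
p \cdot 100 - (1 - p) \cdot 200 = 0
\quad\Longrightarrow\quad
p = \frac{200}{200 + 100} = \tfrac{2}{3}
</code>
Read naively, the stakes imply Yudkowsky putting more than about 2/3 on AI-caused extinction by 2030, and Caplan less.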
  
===== Ajeya Cotra =====