  * I say 1%, but because of AI enabling nukes and pandemic, not ASI.
* [[https://www.metaculus.com/questions/4123/time-between-weak-agi-and-oracle-asi/|When a superhuman AI oracle?]]
  * Criterion: "reliably superhuman performance across virtually all questions of interest"
  * Date depends on the dull "When weak AGI?" question below.
* [[https://www.metaculus.com/questions/3479/when-will-the-first-artificial-general-intelligence-system-be-devised-tested-and-publicly-known-of/|When weak AGI?]]
  * Criteria
  * The Turing test is under-specified and the gaming criterion is non-general.
  

===== Yann LeCun =====

* [[https://www.youtube.com/watch?v=u7e0YUcZYbE|2024-12]]
  * "To have possibly a system that at least to most people feels like it has similar[sic?] intelligence as humans [..] I don't see this happening in less than 5 or 6 years."
* [[https://x.com/slow_developer/status/1871027957411782827|2024-12]]
  * "There is no question that at some point in the future AI systems will match and surpass human intellectual capabilities. They will be very different from current AI systems. [..] Probably over the next decade or two. Those super-intelligent systems will do our bidding and remain under our control."
* [[https://www.youtube.com/watch?v=eDY9FUT5ces|2024-10]]
  * "If this project is crowned with success, we will perhaps have architectures in 7.5 years that can perhaps reach the level of human intelligence. Mark Zuckerberg likes to hear me say that but I can't promise anything."
* [[https://www.youtube.com/watch?v=ketW8xsL-ig|2024-03]]
  * "Before we can get to the scale and performance that we observe in humans it's going to take quite a while. [..] All of this [associative memory, reasoning, hierarchical planning] is going to take at least a decade and probably much more."
* [[https://www.amazon.com/Architects-Intelligence-truth-people-building-ebook/dp/B07H8L8T2J|2018]]
  * "There’s a whole lot of problems that will absolutely pop up, so AGI might take 50 years, it might take 100 years, I’m not too sure."
  
===== Andrej Karpathy =====
===== Gary Marcus =====
  
* 2025-01 [[https://x.com/GaryMarcus/status/1874587110809682092|25 predictions on where AI will be at the end of 2025]] Mostly short-term or tactical.
   * "Less than 10% of the work force will be replaced by AI. Probably less than 5%." Definitely less than 1%.   * "Less than 10% of the work force will be replaced by AI. Probably less than 5%." Definitely less than 1%.
* 2024-04 [[https://garymarcus.substack.com/p/superhuman-agi-is-not-nigh|9 Things AI Won't Do In 2025]] All safe bets for 2025, but these goals will mostly be met over the next 15 years.
  * Care-giving: robots will be increasingly trusted for this 2035-2040
  * Nobel-level discoveries: AI collaboration will expand, but no fully autonomous Nobel science until 2040-2060
* 2022-05 [[https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things|5 Things AI Won't Do In 2029]] Marcus could quibble, but I think he will arguably be wrong on essentially all 5:
  * Understand a movie or novel: won't be a problem
  * Work as a kitchen cook: won't quite yet be practical, but the demos will be impressive and the goal in sight
  * Reliably write 10K lines of bug-free code from an English spec or via non-expert guidance: yes, but "bug-free" requires the spec/guidance to be so detailed as to be expert-level
  * Ingest arbitrary natural-language math proofs for symbolic verification: yes, unless "arbitrary" is read as "any and every"
  
===== Eliezer Yudkowsky =====
  
* [[https://www.theguardian.com/technology/2024/feb/17/humanitys-remaining-timeline-it-looks-more-like-five-years-than-50-meet-the-neo-luddites-warning-of-an-ai-apocalypse|2024-02]] "[O]ur current remaining timeline looks more like five years than 50 years. Could be two years, could be 10."
* [[https://x.com/ESYudkowsky/status/1739705063768232070|2023-12]] "Default timeline to death from ASI: Gosh idk could be 20 months could be 15 years, depends on what hits a wall and on unexpected breakthroughs and shortcuts. People don't realize how absurdly hard it is to set bounds on this, especially on the near side. I don't think we have a right to claim incredible surprise if we die in January."
* [[https://x.com/ESYudkowsky/status/1718058899537018989|2023-10]] "Who can possibly still imagine a world where a child born today goes to college 17 years later?"
* [[https://www.lesswrong.com/posts/ZEgQGAjQm5rTAnGuM/beware-boasting-about-non-existent-forecasting-track-records|2022]] Critique of Yudkowsky's betting record
* [[https://www.econlib.org/archives/2017/01/my_end-of-the-w.html|2017]] Yudkowsky bet Bryan Caplan $200 to Caplan's $100 that AI drives humanity extinct by 2030.
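  * Taking those stakes at face value, the break-even credence for the side laying $200 against $100 is 2/3. This is a naive reading: it ignores the time value of money and the fact that the doom side cannot collect if it wins, which is why end-of-the-world bets are usually settled with cash up front.

<code latex>
% Break-even doom probability p for a bettor staking $200 against $100:
% win the opponent's $100 with probability p, lose your $200 otherwise.
\[
  p \cdot \$100 = (1-p) \cdot \$200
  \quad\Longrightarrow\quad
  p = \tfrac{2}{3}
\]
</code>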
  
===== Ajeya Cotra =====
  
* [[https://drive.google.com/drive/u/0/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP|2020]] Draft Report on Biological Anchors
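  * The core move is arithmetic: pick a biological anchor for the training compute needed for transformative AI, then ask when projected compute growth crosses it. A minimal toy sketch of that calculation is below; the anchor values, base compute, and doubling time are illustrative assumptions for exposition, not the report's actual parameters.

<code python>
import math

# Toy bio-anchors-style projection. All numbers are illustrative
# assumptions, not taken from the report itself.
BASE_YEAR = 2020
BASE_TRAINING_FLOP = 3e23      # roughly GPT-3-scale training compute
DOUBLING_TIME_YEARS = 1.5      # assumed doubling time of the largest runs

# Candidate anchors: hypothesized FLOP requirements for transformative AI.
ANCHORS = {
    "lifetime anchor (~1e15 FLOP/s brain x ~1e9 s)": 1e24,
    "evolution anchor": 1e41,
}

def crossing_year(required_flop: float) -> float:
    """Year the largest training run reaches required_flop, assuming
    compute doubles every DOUBLING_TIME_YEARS starting at BASE_YEAR."""
    doublings = math.log2(required_flop / BASE_TRAINING_FLOP)
    return BASE_YEAR + doublings * DOUBLING_TIME_YEARS

for name, flop in ANCHORS.items():
    print(f"{name}: crossed around {crossing_year(flop):.0f}")
</code>

  * The report itself puts probability distributions over the anchors and the growth path and mixes several anchors, rather than using point estimates like these.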
  
===== Robin Hanson =====