====== Capabilities ======
Let's analyze the essential and expected capabilities of any general intelligence, in order to determine:
  
  * how a superintelligence would exceed the capabilities of human intelligence.
  
===== Core Competencies =====
Capacities are the capabilities that are most essential to intelligence. They are innate in humans, and don't need to be taught. We can't really imagine an intelligence lacking any of them. A mind significantly lacking any of them would seem severely handicapped, no matter how savant. They aren't necessarily logically or functionally independent, and they likely did not evolve independently in humans. These are the core features of intelligence.
  
  * **Language:** receiving and emitting information communicating any of the above (i.e., percepts, memories, imaginings, beliefs, feelings, empathies, intentions)
  
===== Bonus Competencies =====
These are further capabilities that we expect intelligent systems to possess or be able to acquire. They happen not to be innate in humans but are acquired by many humans. They are so useful that it would be shocking for an advanced general intelligence not to have them.
  
For optimal decision-making, these competencies are mandatory minimums. They are like a chess opening book: a system might be intelligent enough to intuit or rediscover them rather than needing to be taught them, but they are so important that the system should have explicit mastery of them.
  
===== Cognitive Virtues/Vices =====
Virtues/vices are not so much cognitive capabilities as they are volitional habits. The ones listed below are so innate to our only known example of general intelligence (humans) that they can be considered a test suite to validate a system's ability to choose intelligently.
  
  * **Sensitivity:** exhibiting appropriate levels of emotion, empathy, transcendence
  
===== Bonus Powers =====
Powers are capabilities that seem somewhat independent of the ones above, are often not well-developed in humans, and are general-purpose rather than domain-specific.
  
  * **Mind Improvement:** increasing the capabilities of minds, whether self or others
  
====== Artificial Super-Intelligence ======
  
===== Bostrom Superpowers =====
Three of the superpowers are hand-waving:
  
  * **Hacking:** Including this on the list is like including lock-picking or safe-cracking. Digital security is subject to algorithmic constraints that bind all possible minds. The relevant superpower here would be social engineering, but that's part of Social Manipulation.
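To make the point about algorithmic constraints concrete, here is a back-of-the-envelope sketch. The guess rate is an illustrative assumption (far beyond any current hardware), not a figure from this page; the arithmetic just shows that brute-forcing a modern key space is counted in key trials, and no amount of cleverness shrinks that count.

<code python>
# Hedged sketch: expected brute-force time for a key space at an assumed guess rate.
# 1e18 guesses/second is an illustrative assumption, far beyond current hardware.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def brute_force_years(key_bits, guesses_per_second):
    """Expected years to hit a key after searching half of the key space."""
    return (2 ** (key_bits - 1)) / guesses_per_second / SECONDS_PER_YEAR

print(f"128-bit key: {brute_force_years(128, 1e18):.1e} years")  # ~5.4e12 years
print(f"256-bit key: {brute_force_years(256, 1e18):.1e} years")  # ~1.8e51 years
</code>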
  
===== Upgrades =====
Dimensions of improvement available to AGI that are not available to humans:
  
  * Quantum computing
  
===== Anthropomorphizing AI =====

  * LLMs have a relatively self-consistent and monolithic system of values and goals

====== Notes ======
  * A hint that individual intelligence has diminishing returns is that our advances come far more from minds cooperating than from individual brilliance. Human organizations and societies are far more capable than the most brilliant humans, and yet organizations do not recognizably possess the standard capabilities of intelligence.
  * The "capabilities" list above constitutes a report card for Artificial General Intelligence. Any claim that a particular AI system is a step toward AGI needs to include an explanation of how the system would mutually leverage these capacities. Nearly all narrow AI systems are not poised to mutually leverage any of these capacities. Such systems are essentially just tools that can extend a narrow ability of a general intelligence -- like night-vision goggles.
  * 2002 Yudkowsky:
    * "primate evolution stumbled across a fitness gradient whose path includes [..] one particular kind of general intelligence"
  * skull birth canal limit
  * Challenges/Predictions
    * Understand cetacean language (unless it's trivial like meerkat language)
    * For a >10Kloc >5yo production system, without human supervision or intervention (see the sketch after this list):
      * Resolve all the TODO comments
      * Refactor all significant code duplication
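A minimal sketch of a baseline for that challenge (the script and its extension list are illustrative assumptions, not part of the challenge itself): enumerate the line count and the TODO backlog a system would have to clear without supervision.

<code python>
# Hypothetical baseline: count source lines and TODO comments under a directory tree,
# to quantify what "resolve all the TODO comments" means for a >10Kloc system.
import os

SOURCE_EXTS = (".py", ".c", ".cpp", ".java", ".js", ".go")  # assumed extension list

def todo_report(root="."):
    total_lines, todos = 0, []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(SOURCE_EXTS):
                path = os.path.join(dirpath, name)
                with open(path, errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        total_lines += 1
                        if "TODO" in line:
                            todos.append((path, lineno, line.strip()))
    return total_lines, todos

lines, todos = todo_report(".")
print(f"{lines} source lines, {len(todos)} TODO comments to resolve")
</code>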

It's similarly invalid to analogize the gap between super-intelligence and human intelligence to the gap between humans and some dumber species, be it chimp or ant or bacterium. Such naive analogies assume intelligence is an open-ended scalar capacity, like speed or strength or longevity. The analogy leans heavily on the (dumb?) notion that, since an ant or chimp can't fathom having human-level intelligence, humans must not be able to fathom having super-human intelligence. One might as well just invoke "super-ness".

Saturation examples:
  * toxicity, visual acuity, height, camouflage, healing, invulnerability
  * algorithmic, computational, domain (tic-tac-toe; see the sketch below), information-theoretic
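A minimal sketch of the domain-saturation point (standard minimax, written here for illustration rather than taken from this page): tic-tac-toe is small enough to solve exhaustively, so once a player follows the optimal policy, no additional intelligence can improve its results in that domain.

<code python>
# Solve tic-tac-toe by exhaustive minimax; perfect play can only force a draw.
from functools import lru_cache

# All winning lines on a 3x3 board indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value for X (+1 win, 0 draw, -1 loss) with both sides playing optimally."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0
    moves = [value(board[:i] + player + board[i + 1:], "O" if player == "X" else "X")
             for i, cell in enumerate(board) if cell == " "]
    return max(moves) if player == "X" else min(moves)

print(value(" " * 9, "X"))  # -> 0: the domain saturates at "never lose"
</code>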

Such analogies assume intelligence is an open-ended scalar, like:
  * altitude, latitude, temperature
  * mass, power, body temperature, speed, toxicity, visual acuity, smell acuity
  * self-awareness, inerrancy, consciousness, volition, self-consistency, shrewdness, reasonableness, wit, creativity
  * integers, reals, aleph
  * super-ness

But that assumption is no more privileged than an analogy to a quantity that has a maximum built into its definition -- such as latitude or humidity or albedo. A better analogy would be to scales whose definitions are more subtle while still being objective, such as hardness, loudness, or sharpness.

Critique the claim that human intelligence would be much higher without the skull birth canal limit.
  