Super-Intelligence


Capabilities

Let's analyze the essential and expected capabilities of any general intelligence, in order to determine:

  • what capabilities a system must have to be intelligent, and
  • how a superintelligence would exceed the capabilities of human intelligence.

Core Competencies

Capacities are the capabilities that are most essential to intelligence. They are innate in humans, and don't need to be taught. We can't really imagine an intelligence lacking any of them; a mind significantly deficient in any of them would seem severely handicapped, no matter how savant-like its other abilities. They aren't necessarily logically or functionally independent, and they likely did not evolve independently in humans. These are the core features of intelligence.

  • Perception: accessing and processing information about the external world
  • Memory: storing and associatively recalling relevant information, especially beliefs acquired via induction or language
  • Imagination: generating hypothetical information about internal/external states, especially using beliefs acquired via induction or language
  • Induction: extracting and testing new patterns (concepts, beliefs, models, causal theories, predictions) from particular perceptions or other recognized patterns
  • Consciousness: introspection and induction about self/identity
  • Empathy: induction about the internal states and future behaviors of other agents
  • Volition: goal-directed intentional behavior
  • Language: receiving and emitting information communicating any of the above (i.e., percepts, memories, imaginings, beliefs, feelings, empathies, intentions)

Bonus Competencies

These are further capabilities that we expect intelligent systems to possess or be able to acquire. They happen not to be innate in humans, but many humans do acquire them. They are so useful that it would be shocking for an advanced general intelligence to lack them.

  • Formal reasoning: logic, math, probability theory, game theory
  • Strategizing: planning, forecasting, manipulating, deceiving, especially under competition
  • Economics: accounting, portfolio theory, asset pricing
  • Epistemology: philosophy of science, critical thinking
  • Axiology: ethics

For optimal decision-making, these competencies are mandatory minimums. They are like a chess opening book: a system might be intelligent enough to intuit or rediscover them rather than needing to be taught them, but they are so important that the system should have explicit mastery of them.

Cognitive Virtues/Vices

Virtues/vices are not so much cognitive capabilities as they are volitional habits. The ones listed below are so innate to our only known example of general intelligence (humans) that they can be considered a test suite to validate a system's ability to choose intelligently.

  • Shrewdness: choosing with strategic planning and discipline, especially in a way that minimizes regret (the realization that the same information and goals should have yielded a different choice; a sketch follows this list)
  • Inerrancy: avoiding obvious mistakes and “unforced errors”
  • Incorruptibility: resistance to vanity, sloth, intemperance, envy, anger, malice
  • Rationality: avoiding superstition and cognitive biases
  • Sensitivity: exhibiting appropriate levels of emotion, empathy, transcendence
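
As a minimal sketch of the regret notion above (my own illustration, not part of the original list; the actions and payoff numbers are invented): regret is the gap between the payoff of the best action that was available and the payoff of the action actually chosen, evaluated with the same information the agent had.

<code python>
# Regret: payoff of the best available action minus payoff of the chosen one,
# evaluated with the information the agent actually had at decision time.
# The actions and payoffs below are invented for illustration.

def regret(payoffs: dict[str, float], chosen: str) -> float:
    best = max(payoffs.values())
    return best - payoffs[chosen]

# Same information, same goals: choosing "hold" yields regret 3.0,
# because "sell" was the best choice derivable from that information.
print(regret({"buy": -1.0, "hold": 2.0, "sell": 5.0}, chosen="hold"))  # 3.0
</code>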

Bonus Powers

Powers are capabilities that seem somewhat independent of those above; they are often not well-developed in humans, and they are general-purpose rather than domain-specific.

  • Creativity: generating valuable novelty
  • Sociability: producing better results by interacting with other minds
  • Wit: using humor and cleverness
  • Mind Improvement: increasing the capabilities of minds, whether self or others

Artificial Super-Intelligence

Bostrom Superpowers

Bostrom's Superintelligence credits a potential machine superintelligence with six “cognitive superpowers.” Three of the superpowers are hand-waving:

  • Intelligence Amplification: Intelligence is an emergent property. Editing minds is hard. Runaway self-improvement (i.e., increasing returns rather than diminishing returns) is hand-waving; a toy model follows this list.
  • Technology Research: If this super power is independent of intelligence amplification, then we'd need a non-hand-waving story about how the constraints that apply here to humans could be lifted for AI. Invoking “biotechnology, nanotechnology” doesn't explain how AI would have a qualitative advantage over humans in developing them.
  • Economic Productivity: “Various skills” is the only unpacking given for this superpower.
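
As a sketch of why the increasing-vs-diminishing distinction is the crux (my own toy model; the update rule and all constants are illustrative assumptions, not anything from Bostrom): whether recursive self-improvement explodes or fizzles depends entirely on the exponent of the returns.

<code python>
# Toy model of recursive self-improvement: I[n+1] = I[n] + c * I[n]**alpha.
# alpha > 1 means increasing returns (each gain buys a bigger next gain);
# alpha < 1 means diminishing returns (growth continues but decelerates).
# All constants are illustrative assumptions.

def self_improve(alpha: float, i0: float = 1.0, c: float = 0.1, steps: int = 50) -> float:
    intelligence = i0
    for _ in range(steps):
        intelligence += c * intelligence ** alpha
    return intelligence

print(f"increasing returns (alpha=1.1): {self_improve(1.1):,.0f}")
print(f"diminishing returns (alpha=0.5): {self_improve(0.5):,.1f}")
# Increasing returns blow up (the "intelligence explosion"); diminishing
# returns grow only polynomially -- no explosion, just steady improvement.
</code>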

The other three superpowers involve built-in limits that bind all possible minds due to the nature of the target domain:

  • Strategizing: The limits of strategizing are already visible. Hari Seldon (Asimov's fictional psychohistorian) is not an option. You can make a thousand-year plan for techno-social trends and climate, but not for techno-social events or weather.
  • Social Manipulation: The super-ness of this superpower is isomorphic to the power of the society being manipulated. The main danger of a single world government is that it could be taken over by a Hitler, Stalin, or Mao. No extra super-ness required, or even possible. For example, no super-intelligence could create a real-life version of Monty Python's “The Funniest Joke in the World.”
  • Hacking: Including this on the list is like including lock-picking or safe-cracking. Digital security is subject to algorithmic constraints that bind all possible minds. The relevant superpower here would be social engineering, but that's part of Social Manipulation.

Upgrades

Dimensions of improvement available to AGI that are not available to humans:

  • Processing speed/stamina
  • Component count
  • Interconnectedness
  • Memory capacity
  • I/O bandwidth
  • Expandability: ability to increase the above capacities on demand
  • Processor/memory reliability
  • Internal signal propagation speed
  • Multitasking: multiple threads of activity/control/attention within a mind
  • Multiprocessing: multiple minds within a “brain”
  • Elastic distributed computing: scaling brain count up/down on demand
  • Introspection
  • Modifiability by self or others
  • Quantum computing

Anthropomorphizing AI

  • Treating LLMs as if they have a relatively self-consistent and monolithic system of values and goals

Notes

  • A hint that individual intelligence has diminishing returns is that our advances come far more from minds cooperating than from individual brilliance. Human organizations and societies are far more capable than the most brilliant humans, and yet organizations do not recognizably possess the standard capabilities of intelligence.
  • The “capabilities” list above constitutes a report card for Artificial General Intelligence. Any claim that a particular AI system is a step toward AGI needs to include an explanation of how the system would mutually leverage these capacities. Nearly all narrow AI systems are not poised to mutually leverage any of these capacities. Such systems are essentially just tools that can extend a narrow ability of a general intelligence – like night-vision goggles.
  • Crude analogies: super-intelligence is to human as human is to: ants, chimps, children
  • AGI alignment is a similar problem to aligning teenagers, or aligning future generations of humans. In both cases, religion and totalitarianism are the least-ineffective “solutions” that humans have found.
    • Self-modifying AGI has the alignment problem too. The argument about humans not foreseeing AGI values applies just as well to AGI vN not foreseeing AGI vN+1 values.
  • Text extrapolators: GPT, PaLM, LaMDA
  • The Cargo Cult Argument For AGI:
    • An AGI could X {prove theorems, recognize images, win chess, drive cars, fold proteins, translate/extrapolate texts, etc.}.
    • Narrow AI system Y can X.
    • Therefore Y can be AGI if upgraded enough.
  • Maximality
    • Northness, coldness, speed of light
  • Turing Test Questions
    • Turing test questions for each capability
    • Can this capability just emerge from AGI systems not explicitly trying to add it?
    • How much better could each capability get?
    • What could a SI do with a capability that no human/group could?
  • Limits
    • Logical, mathematical, computational
    • Physical: c, quantum uncertainty, thermodynamics, conservation
    • Economic
    • In what ways would/could the intellectual output of a super-intelligence differ from the output of human society, science, and markets? (It's almost a trick question. If you can describe a possible intellectual output of super-intelligence, then human society could surely produce it if given sufficient time and non-AGI resources.)
    • Intelligence researchers fundamentally over-estimate how intelligent humans are. No human exhibits uniquely human-level intelligence for more than a small fraction of her waking hours. Very few humans consistently apply intelligence to important projects in their lives. The vast majority of the achievements of human intelligence are not the result of geniuses, but rather of non-intelligent processes and institutions that harness the outputs of humans of various levels of intelligence. All of us isn't smarter than any of us, but all of us collectively does things that no genius among us can do or even understand.
    • “I find it difficult to credit that a bound holding for minds in general on all physical substrates coincidentally limits intelligence to the exact level of the very first hominid subspecies to evolve to the point of developing computer scientists.” [2002 Yudkowsky - Levels of Organization in General Intelligence]
      • The bound might not be a technological one, but rather an algorithmic one. No substrate can circumvent logical paradoxes, or recognize more patterns than exist in the data, or prove P ≠ NP.
  • Emulation Limits
    • cetacean language?
  • how did H. sapiens become intelligent?
  • how does a brain become intelligent?
  • how did human society become intelligent, i.e., technological?
    • culture
    • improved language?
    • threshold technology? e.g., energy, food production, metallurgy
    • capital accumulation?
    • shelling out
    • specialization
  • Cyc
  • SOAR
  • Programming languages, IDEs, CAD, machine tools, manufacturing robots, Wikipedia
  • Self-improvement: diminishing returns vs accelerating returns
  • Nazi German science vs Allied science
  • 2002 Yudkowsky:
    • “primate evolution stumbled across a fitness gradient whose path includes [..] one particular kind of general intelligence”
  • skull birth canal limit
  • Challenges/Predictions
    • Understand cetacean language (unless it's trivial like meerkat language)
    • For a >10 kLOC, >5-year-old production system, without human supervision or intervention:
      • Port to a significantly different language or platform technology
      • Refactor functionality to make a significantly different design trade-off between e.g., storage space vs. runtime performance
      • Add an entirely novel category of major functionality e.g., rigorously applying complex human-readable business rules
      • Resolve all the TODO comments
      • Refactor all significant code duplication

It's similarly invalid to analogize the gap between super-intelligence and human intelligence to the gap between humans and some dumber species, be it chimp or ant or bacterium. Such naive analogies assume intelligence is an open-ended scalar capacity, like speed or strength or longevity. The analogy leans heavily on the (dumb?) notion that since an ant or chimp can't fathom having human-level intelligence, humans must not be able to fathom having super-human intelligence. One might as well just invoke “super-ness”.

Saturation: toxicity, visual acuity, height, camouflage, healing, invulnerability

Saturation: algorithmic, computational, domain (tic-tac-toe), information-theoretic

Such analogies assume intelligence is an open-ended scalar, like:

  • altitude, latitude, temperature
  • mass, power, body temperature, speed, toxicity, visual acuity, smell acuity
  • self-awareness, inerrancy, consciousness, volition, self-consistency, shrewdness, reasonableness, wit, creativity
  • integers, reals, aleph
  • super-ness

But such assumptions are no more privileged than an analogy to a quantity that has a maximum built into its definition, such as latitude or humidity or albedo.

A better analogy would be to scales with definitions that are more subtle, while still being objectively defined, such as hardness, loudness, or sharpness.
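
One concrete instance of such a subtle-but-bounded scale (my illustration, assuming the standard sound-pressure-level definition; not an example from the original text): loudness is defined on an open-ended logarithmic scale, yet in air it saturates anyway, because a sound wave's pressure swing cannot exceed atmospheric pressure.

<code python>
import math

# Sound pressure level: SPL = 20 * log10(p / p0), with reference p0 = 20 uPa.
# The dB scale is open-ended by definition, but in Earth's atmosphere the
# pressure swing of an undistorted sound wave is capped at ~1 atm, so
# loudness in air saturates near 194 dB no matter how powerful the source.

P0 = 20e-6        # reference pressure, Pa
P_ATM = 101325.0  # atmospheric pressure, Pa

def spl_db(pressure_pa: float) -> float:
    return 20 * math.log10(pressure_pa / P0)

print(f"ceiling for undistorted sound in air: {spl_db(P_ATM):.0f} dB")  # ~194 dB
</code>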

Critique the claim that human intelligence would be much higher without the skull/birth-canal limit.

Examples of Super-ness

  • Biological
    • evolutionary advances
    • Columbian exchange
    • invasive species
  • Historical - Actual
    • 1415 West Africa vs. Portugal
    • 1492 Taíno vs. Spain
    • 1519 Aztecs vs. Spain
    • 1532 Incas vs. Spain
    • 1500s Filipinos vs. Spain
    • 1652 Khoe/San et al vs. Dutch Cape Colony
    • 1770 Aboriginal Australia vs. Britain
    • 1778 Hawaiians vs. Britain
    • 1700s Indigenous Siberians vs. Russia
    • 1788 Māori vs. Britain
    • 1804 Aboriginal Tasmania vs. Britain
    • 1800s Pacific Islanders vs. Europeans
    • 1800s Plains Amerindians vs. U.S.
    • 1840 Qing China vs. Britain
    • 1853 Edo Japan vs. U.S.
    • 1940s Melanesians vs. U.S. (cargo cults)
    • British East India Company (or 1498 Portugal?)
  • Counterfactual
    • nuclear weaponry
  • Technological
