Possible Future Global Catastrophes

An excerpt from the online hypertext Human Knowledge: Foundations and Limits.
Natural Catastrophes
Earthquake. Through 2300, earthquakes and floods will still occasionally kill tens of thousands of humans in developing societies. Of even greater historical consequence would be a possible massive earthquake in Tokyo, San Francisco, or Los Angeles. Such a quake could cause on the order of a trillion dollars in damage and could trigger a worldwide depression. In the worst case this would set back human progress by perhaps a decade.

Pandemic. How much of humanity could be killed in the future by a naturally-arising pathogen? In the 1500s and 1600s, European epidemics killed perhaps 90% of the aboriginal Americans. In the 1300s, the plague killed one third of the humans in Europe. The worldwide influenza of 1918 killed 30 million, and AIDS had killed at least half that by 2000. It seems unlikely that a natural pathogen could kill more than a small fraction of humanity, especially given modern sanitation. Evolutionary pressures tend to make pathogens less virulent over time, and newly-arising pathogens rarely seem to extinct their host species even in their initial outbreak. Genetically-engineered pathogens may be different.

Alien Aggression. The arrival of extraterrestrial intelligence on Earth might seem to pose a threat to human civilization. The arrival of Homo sapiens sapiens in Europe heralded the end of Homo sapiens neanderthalensis. The arrival of Homo sapiens in Australia and the Americas quickly led to the extinction of most of the native megafauna. Contact with farming civilization has almost invariably led to the decline or assimilation of hunter-gatherer cultures. Contact with industrial civilization has almost invariably caused severe disruption in pre-industrial civilizations.

Fortunately, ETI would be unlikely to colonize Earth. Biochemical differences would surely render Earth life inedible to any ETI that had not yet become machine-based. As modern economic experience shows, raw human labor is too easy to automate to make enslavement worthwhile. Earth has deuterium-rich oceans of water, but even more water is available on Europa, which is also not as deep in the Sun's gravity well. Except for its reactive oxygen atmosphere, Earth's climate is relatively benign and might be an attractive place to establish an ETI population. However, spacefaring ETI would probably value Earth more for studying than for exploiting, and could just as easily satisfy its resource needs using the uninhabited parts of the solar system. Note that an alien Von Neumann probe could pose a variant of the robot aggression or nanoplague catastrophes.

Interplanetary Impact. The impact on Earth of an asteroid or comet only a few miles across would have devastating blast, tidal wave, incendiary, and smoke effects. In particular, the global pall of smoke raised by such an impact could block enough sunlight to effectively cancel one or two agricultural seasons and starve billions of humans to death. Such a catastrophe would set back human progress by one or two centuries. With five or ten years' warning, humanity could mount a mission to prevent such an impact by adjusting the impactor's orbit. Such impacts are extremely rare, occurring only once every few hundred thousand years. Less probable by far is impact with, or orbital disruption by, a small black hole that might wander through the Solar system. Impact with a black hole would effectively destroy the surface of the Earth and most or all life on it. Disruption of the Earth's orbit could cause a biosphere-destroying runaway greenhouse effect like that on Venus. Even a slight increase in the eccentricity of Earth's orbit would cause ecological disruptions that would probably starve billions of humans. Ejection of Earth from the Solar system would in a matter of months freeze to death all terrestrial life (except perhaps ecosystems around volcanic vents at the bottoms of frozen oceans). Humanity will not be safe from such an event until its first self-sustaining extraplanetary colonies are created around 3000.
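The stated frequency can be turned into a rough probability for a given time window by treating large impacts as a Poisson process. This is only a back-of-the-envelope sketch; the 500,000-year mean interval is an assumed round figure for "every few hundred thousand years", not a number from the source.

```python
import math

# Assumed mean time between civilization-threatening impacts (round figure).
MEAN_INTERVAL_YEARS = 500_000

def impact_probability(window_years: float) -> float:
    """P(at least one impact in the window), assuming a Poisson process:
    1 - exp(-window / mean_interval)."""
    return 1.0 - math.exp(-window_years / MEAN_INTERVAL_YEARS)

print(f"Per century:    {impact_probability(100):.4%}")   # about 0.02%
print(f"Per millennium: {impact_probability(1_000):.3%}")  # about 0.2%
```

For windows much shorter than the mean interval, the probability is nearly linear in the window length, which is why a century of exposure works out to roughly 100/500,000.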

Supernova. A supernova would have to be within a few tens of light years of Earth for its radiation to endanger creatures living at the bottom of Earth's atmosphere. No stars that close to Earth will go supernova in the next few million years.

Ice Age. When Earth's next ice age arrives in 10,000 years or so, it will grant slight but welcome relief from the problem of heat pollution.

Magnetic Field Reversal. Earth's magnetic field reverses polarity every few hundred thousand years, and is almost non-existent for perhaps a century during the transition. The last reversal was 780 Kya, and the magnetic field's strength decreased 5% during the 20th century. During the next reversal, the ozone layer will be exposed to charged solar particles that could weaken its ability to shield humans from ultraviolet radiation. However, past reversals are not associated with any changes or extinctions in the fossil record, and the next reversal will not likely affect humanity in a catastrophic way.

Man-made Catastrophes
Nuclear Catastrophe. Nuclear power could result in three kinds of catastrophe: radioactive pollution, limited nuclear bombing, and general nuclear war. Accidental or deliberate radioactive pollution could kill tens or hundreds of thousands, but is quite unlikely to happen. Regional nuclear conflict in the Middle East or the Indian subcontinent could kill several million. Nuclear terrorism against Washington D.C. or New York City could kill more than a million and set back human progress by up to a decade. General nuclear war would kill hundreds of millions and could trigger a nuclear winter that might starve hundreds of millions more. While such a worst case would set back human progress by one or two centuries, existing nuclear arsenals could neither extinct humanity nor end human civilization.

Cultural Decline. Some humans fear that vice, crime, and corruption indicate ongoing social decline or impending collapse. Other humans fear that problems of class division, pollution, education, and infrastructure indicate economic decline or impending collapse. These fears are perennial and unfounded. Past examples of the drastic decline or collapse of a culture or civilization have almost always been due to environmental change, or infection or invasion by outside humans. But after the advent of continental steam locomotion in the mid-1800s, no society remains unexposed to the infections of the others. Similarly, all societies have been made part of a single global human civilization which is not subject to invasion by outside humans. Environmental change indeed poses a set of challenges, but they seem to represent constraints on growth rather than seeds of collapse.

Cultural stagnation is another possible (but milder) kind of potential catastrophe. As in Ming China, Middle Ages Europe, or the Soviet Bloc, stagnation can result if a static ideology takes hold and suppresses dissent. Such a development seems unlikely, given the intellectual freedom and communication technology of the modern world. Ideologies with totalitarian potential include fideist religions, communism, and ecological primitivism.

Bioterrorism. Could a pathogen be genetically designed to be virulent enough to extinct humanity? A pathogen would have to be designed to spread easily from person to person, persist in the environment, resist antibiotics and immune responses, and cause almost 100% mortality. Designing for long latency (e.g. months) might be necessary to ensure wide spread, but no length may be enough to infect every last human.

Robot Aggression. Some humans fear that the combination of robotics and artificial intelligence will in effect create a new dominant species that will not tolerate human control or even resource competition. These fears are misplaced. Artificial intelligence will be developed gradually by about 2200, and will not evolve runaway super-intelligence. Even when AI is integrated with artifactual life by the early 2200s, the time and energy constraints on artifactual persons will render them no more capable of global domination than any particular variety of humans (i.e. natural persons). Similarly, humanity's first Von Neumann probes will be incapable of overwhelming Earth's defenses even if they tried. To be truly dangerous, VN probes would have to be of a species with both true intelligence and a significant military advantage over humanity. Such a species would be unlikely to engage in alien aggression.

Nanoplague. Self-replicating nanotechnology could in theory become a cancer to the Earth's biosphere, replacing all ribonucleic life with nanotech life. The primary limit on the expansion of such nanotech life would, as for all life, be the availability of usable energy and material. Since any organic material would presumably be usable as feedstock, the binding constraint on how fast nanocancer could consume organic life would be usable energy. Fossil fuels are not sufficiently omnipresent, and fusion is not sufficiently portable, so nanocancer would, like ribonucleic microorganisms, have to feed on sunlight or on organic tissues. Ribonucleic photosynthesis captures a maximum of about 10% of incident solar energy, while nanocancer should be able to capture at least 50%. The only way to stop nanocancer would be to cut off its access to energy and material, or to interfere with its mechanisms for using them.
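The energy comparison above can be made concrete with simple arithmetic. This is an illustrative sketch only; the average-insolation figure is an assumed round number, while the two capture fractions are the ones stated in the text.

```python
# Areal power available to photosynthetic life vs. hypothetical nanotech life.
AVG_INSOLATION_W_PER_M2 = 200   # rough global-average surface insolation (assumption)
PLANT_CAPTURE = 0.10            # stated maximum for ribonucleic photosynthesis
NANO_CAPTURE = 0.50             # stated lower bound for nanotech capture

plant_power = AVG_INSOLATION_W_PER_M2 * PLANT_CAPTURE   # 20 W per square meter
nano_power = AVG_INSOLATION_W_PER_M2 * NANO_CAPTURE     # 100 W per square meter

print(f"Nanotech energy advantage: {nano_power / plant_power:.0f}x")  # 5x
```

On these assumptions, solar-powered nanocancer would have roughly a fivefold energy advantage per unit area over photosynthetic life, independent of the insolation figure chosen, since the ratio depends only on the two capture fractions.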