
Draft:Vulnerable world hypothesis


The vulnerable world hypothesis,[1] also known as the "black ball" hypothesis,[2] is the idea that civilization is likely to be destroyed by some disruptive technology (a "black ball") unless extraordinary preventive measures are taken. The philosopher Nick Bostrom introduced the hypothesis in a 2019 paper in the journal Global Policy[3][1] and developed it further in a 2022 essay in Aeon, co-authored with Matthew van der Merwe.[4] The hypothesis is often cited in discussions of the safety of advanced technologies.[5][6]

Introduction


Background and definition


Bostrom illustrated the hypothesis with an urn analogy. He likened the process of technological invention to drawing balls from an urn, where the color of each ball represents the technology's impact. White balls are beneficial and constitute most of the balls drawn from the urn. Some balls are gray, representing technologies with mixed or moderate effects. Black balls represent hypothetical technologies that by default destroy the civilization that invents them. According to Bostrom, it is only through sheer luck that humanity has not yet drawn a black ball.[5]

Bostrom defined the vulnerable world hypothesis as the possibility that "If technological development continues then a set of capabilities will at some point be attained that make the devastation of civilization extremely likely, unless civilization sufficiently exits the semi-anarchic default condition."[3][a] The "semi-anarchic default condition" refers here to a world with:[3][7]

  1. Limited capacity for preventive policing.
  2. Limited capacity for global governance.
  3. Actors with diverse motivations.[b]

Types of vulnerabilities


Bostrom proposed a classification of these vulnerabilities, gave examples of how each type of technology could go wrong, and offered policy recommendations such as differential technological development.[5][3] The measures thought necessary to survive the development of such a technology (effective global governance or preventive policing, depending on the type of vulnerability) are controversial.[5][6][8] The classification includes:[3][1]

  • Type 0 ("surprising strangelets"): a technology carries a hidden risk and inadvertently devastates civilization.

A hypothetical example is the possibility that nuclear bombs could have ignited the atmosphere. A report commissioned by Robert Oppenheimer before the Trinity nuclear test predicted that ignition would not occur, but its reasoning has been deemed shaky given the potential consequences: "One may conclude that the arguments of this paper make it unreasonable to expect that the N + N reaction could propagate. An unlimited propagation is even less likely. However, the complexity of the argument and the absence of satisfactory experimental foundation makes further work on the subject highly desirable."[5]

  • Type 1 ("easy nukes"): a technology gives small groups of people the ability to cause mass destruction.

The "easy nukes" thought experiment proposed by Nick Bostrom opens the question of what would have happened if nuclear chain reactions had been easier to produce, for example by "sending an electric current through a metal object placed between two sheets of glass."[5]

  • Type 2a ("safe first strike"): a technology has the potential to devastate civilization, and powerful actors are incentivized to use it, whether because striking first appears advantageous or because of a tragedy-of-the-commons scenario.
  • Type 2b ("worse global warming"): a great many actors face incentives to take some slightly damaging action, such that the combined effect of those actions is civilizational devastation.

Reception


Discussion and development


Bostrom discussed the idea publicly in a 2019 TED interview with host Chris Anderson.[9]

Implications


Pausing technological progress may be neither possible nor desirable. An alternative is to prioritize technologies expected to have a positive impact and to delay those that may be catastrophic, a principle called differential technological development.[5]

Potential solutions vary with the type of vulnerability. Dealing with type-2 vulnerabilities may require highly effective governance and international cooperation. For type-1 vulnerabilities, if the means of mass destruction ever become accessible to individuals, at least some small fraction of the population could be expected to use them.[5] In extreme cases, mass surveillance might be required to avoid the destruction of civilization, a prospect that received significant media coverage.[10][11][12][13][14]

Technologies that have been proposed as potential vulnerabilities include advanced artificial intelligence, nanotechnology, and synthetic biology, the last of which may make it easy to create enhanced pandemics.[15][2][16][17]

Footnotes

  a. ^ Whether devastation occurs depends, according to Nick Bostrom, on whether society remains in the "semi-anarchic default condition" (see § Background and definition).
  b. ^ In particular, the motivation of at least some small fraction of the population to destroy civilization even at a personal cost. According to Bostrom: "Given the diversity of human character and circumstance, for any ever so imprudent, immoral, or self-defeating action, there is some residual fraction of humans who would choose to take that action."[5]

References

  1. ^ a b c Bostrom, Nick (November 2019). "The Vulnerable World Hypothesis". Global Policy. 10 (4): 455–476. doi:10.1111/1758-5899.12718.
  2. ^ a b Bilton, Nick (2018-11-28). "The "Black Ball" Hypothesis: Is Gene Editing More Dangerous Than Nuclear Weapons?". Vanity Fair. Retrieved 2023-11-07.
  3. ^ a b c d e Katte, Abhijeet (2018-12-25). "AI Doomsday Can Be Avoided If We Establish 'World Government': Nick Bostrom". Analytics India Magazine. Retrieved 2023-05-28.
  4. ^ Bostrom, Nick; van der Merwe, Matthew (2022). "None of our technologies has managed to destroy humanity – yet". Aeon.
  5. ^ a b c d e f g h i Piper, Kelsey (2018-11-19). "How technological progress is making it likelier than ever that humans will destroy ourselves". Vox. Retrieved 2023-05-28.
  6. ^ a b Finley, Klint. "Technology That Could End Humanity—and How to Stop It". Wired. ISSN 1059-1028. Retrieved 2023-11-07.
  7. ^ "Notes on the Vulnerable World Hypothesis". michaelnotebook.com.
  8. ^ "How to Protect Humanity From the Invention That Inadvertently Kills Us All". Inverse. 2019-04-18. Retrieved 2023-11-07.
  9. ^ Bostrom, Nick (2019-12-19). How civilization could destroy itself – and 4 ways we could prevent it – via www.ted.com.
  10. ^ Houser, Kristin (19 April 2019). "Professor: Total surveillance is the only way to save humanity". Futurism. Retrieved 2023-05-28.
  11. ^ Bendix, Aria. "An Oxford philosopher who's inspired Elon Musk thinks mass surveillance might be the only way to save humanity from doom". Business Insider. Retrieved 2023-05-28.
  12. ^ Taggart, Dagny (2019-04-24). "Global Government and Surveillance May Be Needed to Save Humanity". The Organic Prepper. Retrieved 2023-10-16.
  13. ^ Gheorghe, Ana (2019-04-27). "Mass surveillance could save us from extinction, claims Professor". Cherwell. Retrieved 2023-10-16.
  14. ^ "None of our technologies has managed to destroy humanity – yet". Aeon. 12 February 2021. Retrieved 2023-05-28.
  15. ^ Walsh, Bryan (July 15, 2020). "The dire lessons of the first nuclear bomb test". Axios.
  16. ^ Torres, Phil (2019-10-21). "Omniviolence Is Coming and the World Isn't Ready". Nautilus. Retrieved 2023-05-29.
  17. ^ "AI-Powered Malware Holds Potential For Extreme Consequences - Could Artificial Intelligence Be a Black Ball From the Urn of Creativity?". Zvelo. 2023-04-26. Retrieved 2023-11-07.
External links

The Vulnerable World Hypothesis

Category:Existential risk