P(doom)

From Wikipedia, the free encyclopedia

P(doom) is a term in AI safety that refers to the probability of existentially catastrophic outcomes (or "doom") as a result of artificial intelligence.[1][2] The exact outcomes in question differ from one prediction to another, but generally allude to the existential risk from artificial general intelligence.[3]

Originating as an inside joke among AI researchers, the term came to prominence in 2023 following the release of GPT-4, as high-profile figures such as Geoffrey Hinton[4] and Yoshua Bengio[5] began to warn of the risks of AI.[6] In a 2023 survey, AI researchers were asked to estimate the probability that future AI advancements could lead to human extinction or similarly severe and permanent disempowerment within the next 100 years. The mean value from the responses was 14.4%, with a median value of 5%.[7][8]

Sample P(doom) values

Name | P(doom) | Notes
Elon Musk | 10–20%[9] | Businessman and CEO of X, Tesla, and SpaceX
Vitalik Buterin | 10%[1] | Co-founder of Ethereum
Marc Andreessen | 0%[10] | American businessman
Geoffrey Hinton | 10% to >50%[6][11][Note 1] | "Godfather of AI" and 2024 Nobel Prize winner in Physics
Yann LeCun | <0.01%[12][Note 2] | Chief AI Scientist at Meta
Demis Hassabis | 0–25%[13][14] | Co-founder and CEO of Google DeepMind and Isomorphic Labs
Nate Silver | 5–10%[15] | Statistician, founder of FiveThirtyEight
Lina Khan | 15%[6] | Chair of the Federal Trade Commission
Eliezer Yudkowsky | 95%+[1] | Founder of the Machine Intelligence Research Institute
Yoshua Bengio | 20%[3][Note 3] | Computer scientist and scientific director of the Montreal Institute for Learning Algorithms
Emmett Shear | 5–50%[6] | Co-founder of Twitch and former interim CEO of OpenAI
Dario Amodei | 10–25%[6] | CEO of Anthropic
Emad Mostaque | 50%[16] | Co-founder of Stability AI
Grady Booch | c. 0%[1][Note 4] | American software engineer
Casey Newton | 5%[1] | American technology journalist
Toby Ord | 10%[17] | Australian philosopher and author of The Precipice
Paul Christiano | 50%[18] | Head of research at the US AI Safety Institute
Zvi Mowshowitz | 60%[19] | Writer on artificial intelligence, former competitive Magic: The Gathering player
Dan Hendrycks | 80%+[1][Note 5] | Director of the Center for AI Safety
Roman Yampolskiy | 99.9%[20][Note 6] | Latvian computer scientist
Jan Leike | 10–90%[1] | AI alignment researcher at Anthropic, formerly of DeepMind and OpenAI

Criticism

There has been some debate about the usefulness of P(doom) as a term, in part because a given prediction often leaves unclear whether it is conditional on the existence of artificial general intelligence, what time frame it assumes, and what precisely counts as "doom".[6][21]

Notes

  1. ^ Conditional on A.I. not being "strongly regulated", time frame of 30 years.
  2. ^ "Less likely than an asteroid wiping us out".
  3. ^ Based on an estimated "50 per cent probability that AI would reach human-level capabilities within a decade, and a greater than 50 per cent likelihood that AI or humans themselves would turn the technology against humanity at scale."
  4. ^ Equivalent to "P(all the oxygen in my room spontaneously moving to a corner thereby suffocating me)".
  5. ^ Up from approximately 20% two years prior.
  6. ^ Within the next 100 years.

References

  1. ^ a b c d e f g Railey, Clint (2023-07-12). "P(doom) is AI's latest apocalypse metric. Here's how to calculate your score". Fast Company.
  2. ^ Thomas, Sean (2024-03-04). "Are we ready for P(doom)?". The Spectator. Retrieved 2024-06-19.
  3. ^ a b "It started as a dark in-joke. It could also be one of the most important questions facing humanity". ABC News. 2023-07-14. Retrieved 2024-06-18.
  4. ^ Metz, Cade (2023-05-01). "'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead". The New York Times. ISSN 0362-4331. Retrieved 2024-06-19.
  5. ^ "One of the "godfathers of AI" airs his concerns". The Economist. ISSN 0013-0613. Retrieved 2024-06-19.
  6. ^ a b c d e f Roose, Kevin (2023-12-06). "Silicon Valley Confronts a Grim New A.I. Metric". The New York Times. ISSN 0362-4331. Retrieved 2024-06-17.
  7. ^ Piper, Kelsey (2024-01-10). "Thousands of AI experts are torn about what they've created, new study finds". Vox. Retrieved 2024-09-02.
  8. ^ "2023 Expert Survey on Progress in AI [AI Impacts Wiki]". wiki.aiimpacts.org. Retrieved 2024-09-02.
  9. ^ Tangalakis-Lippert, Katherine. "Elon Musk says there could be a 20% chance AI destroys humanity — but we should do it anyway". Business Insider. Retrieved 2024-06-19.
  10. ^ Marantz, Andrew (2024-03-11). "Among the A.I. Doomsayers". The New Yorker. ISSN 0028-792X. Retrieved 2024-06-19.
  11. ^ METR (Model Evaluation & Threat Research) (2024-06-27). Q&A with Geoffrey Hinton. Retrieved 2025-02-07 – via YouTube.
  12. ^ Wayne Williams (2024-04-07). "Top AI researcher says AI will end humanity and we should stop developing it now — but don't worry, Elon Musk disagrees". TechRadar. Retrieved 2024-06-19.
  13. ^ "Demis Hassabis on Chatbots to AGI | EP 71". YouTube. 23 February 2024. Retrieved 8 October 2024.
  14. ^ "Unreasonably Effective AI with Demis Hassabis". YouTube. 14 August 2024. Retrieved 30 January 2025.
  15. ^ "It's time to come to grips with AI". Silver Bulletin. 2025-01-27. Retrieved 2025-02-03.
  16. ^ "Emad (@EMostaque) on X". X (formerly Twitter). 2024-12-04. Retrieved 2024-12-04.
  17. ^ "Is there really a 1 in 6 chance of human extinction this century?". ABC News. 2023-10-08. Retrieved 2024-09-01.
  18. ^ "ChatGPT creator says there's 50% chance AI ends in 'doom'". The Independent. 2023-05-03. Retrieved 2024-06-19.
  19. ^ Jaffee, Theo (2023-08-18). "Zvi Mowshowitz - Rationality, Writing, Public Policy, and AI". YouTube. Retrieved 2025-02-03.
  20. ^ Altchek, Ana. "Why this AI researcher thinks there's a 99.9% chance AI wipes us out". Business Insider. Retrieved 2024-06-18.
  21. ^ King, Isaac (2024-01-01). "Stop talking about p(doom)". LessWrong.