Dan Hendrycks
| Dan Hendrycks | |
| --- | --- |
| Born | 1994 or 1995 (age 29–30) |
| Education | University of Chicago (B.S., 2018); University of California, Berkeley (Ph.D., 2022) |
| Scientific career | |
| Fields | Machine learning, AI safety |
| Institutions | UC Berkeley; Center for AI Safety |
Dan Hendrycks (born 1994 or 1995[1]) is an American machine learning researcher. He serves as the director of the Center for AI Safety.
Early life and education
Hendrycks was raised in a Christian evangelical household in Marshfield, Missouri.[2][3] He received a B.S. from the University of Chicago in 2018 and a Ph.D. in computer science from the University of California, Berkeley, in 2022.[4]
Career and research
Hendrycks' research focuses on machine learning safety, machine ethics, and model robustness.
He credits 80,000 Hours, a careers program linked to the effective altruism (EA) movement, with focusing his career on AI safety, though he has denied being an advocate for EA.[2]
In February 2022, Hendrycks co-authored recommendations to the US National Institute of Standards and Technology (NIST) on managing risks from artificial intelligence.[5][6]
In September 2022, Hendrycks wrote a paper providing a framework for analyzing the impact of AI research on societal risks.[7][8] He later published a paper in March 2023 examining how natural selection and competitive pressures could shape the goals of artificial agents.[9][10][11] This was followed by "An Overview of Catastrophic AI Risks", which discusses four categories of risks: malicious use, AI race dynamics, organizational risks, and rogue AI agents.[12][13]
Hendrycks is the safety advisor of xAI, an AI startup founded by Elon Musk in 2023. To avoid potential conflicts of interest, he receives a symbolic one-dollar salary and holds no equity in the company.[1][14] As of November 2024, he is also an advisor at Scale AI.[15]
In 2024, Hendrycks published a 568-page book, *Introduction to AI Safety, Ethics, and Society*, based on courseware he had previously developed.[16]
Selected publications
- Hendrycks, Dan; Gimpel, Kevin (2020-07-08). "Gaussian Error Linear Units (GELUs)". arXiv:1606.08415 [cs.LG]. (The GELU definition is reproduced after this list.)
- Hendrycks, Dan; Gimpel, Kevin (2018-10-03). "A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks". International Conference on Learning Representations 2017. arXiv:1610.02136.
- Hendrycks, Dan; Mazeika, Mantas; Dietterich, Thomas (2019-01-28). "Deep Anomaly Detection with Outlier Exposure". International Conference on Learning Representations 2019. arXiv:1812.04606.
- Hendrycks, Dan; Mazeika, Mantas; Zou, Andy (2021-10-25). "What Would Jiminy Cricket Do? Towards Agents That Behave Morally". Conference on Neural Information Processing Systems 2021. arXiv:2110.13136.
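The first entry above introduced the Gaussian Error Linear Unit (GELU), an activation function that has since become standard in transformer models such as BERT and GPT. In brief, as defined in that paper, GELU weights its input x by the standard Gaussian cumulative distribution function Φ, and the paper also gives a tanh-based approximation for fast computation:

$$\mathrm{GELU}(x) = x\,\Phi(x) = \frac{x}{2}\left[1 + \operatorname{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right] \approx \frac{x}{2}\left[1 + \tanh\!\left(\sqrt{\frac{2}{\pi}}\left(x + 0.044715\,x^{3}\right)\right)\right]$$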
References
1. Henshall, Will (September 7, 2023). "Time 100 AI: Dan Hendrycks". Time.
2. Scharfenberg, David (July 6, 2023). "Dan Hendrycks wants to save us from an AI catastrophe. He's not sure he'll succeed". The Boston Globe. Archived from the original on July 8, 2023.
3. Castaldo, Joe (June 23, 2023). "'I hope I'm wrong': Why some experts see doom in AI". The Globe and Mail.
4. "Dan Hendrycks". people.eecs.berkeley.edu. Retrieved April 14, 2023.
5. "Nvidia moves into A.I. services and ChatGPT can now use your credit card". Fortune. Retrieved April 13, 2023.
6. "Request for Information to the Update of the National Artificial Intelligence Research and Development Strategic Plan: Responses" (PDF). National Artificial Intelligence Initiative. March 2022.
7. Hendrycks, Dan; Mazeika, Mantas (June 13, 2022). "X-Risk Analysis for AI Research". arXiv:2206.05862v7 [cs.CY].
8. Gendron, Will. "An AI safety expert outlined a range of speculative doomsday scenarios, from weaponization to power-seeking behavior". Business Insider. Retrieved May 7, 2023.
9. Hendrycks, Dan (March 28, 2023). "Natural Selection Favors AIs over Humans". arXiv:2303.16200 [cs.CY].
10. Colton, Emma (April 3, 2023). "AI could go 'Terminator,' gain upper hand over humans in Darwinian rules of evolution, report warns". Fox News. Retrieved April 14, 2023.
11. Klein, Ezra (April 7, 2023). "Why A.I. Might Not Take Your Job or Supercharge the Economy". The New York Times. Retrieved April 14, 2023.
12. Hendrycks, Dan; Mazeika, Mantas; Woodside, Thomas (2023). "An Overview of Catastrophic AI Risks". arXiv:2306.12001 [cs.CY].
13. Scharfenberg, David (July 6, 2023). "Dan Hendrycks wants to save us from an AI catastrophe. He's not sure he'll succeed". The Boston Globe. Retrieved July 10, 2023.
14. Lovely, Garrison (January 22, 2024). "Can Humanity Survive AI?". Jacobin.
15. Goldman, Sharon (November 14, 2024). "Elon Musk's xAI safety whisperer just became an advisor to Scale AI". Fortune. Retrieved November 14, 2024.
16. "AI Safety, Ethics, and Society Textbook". www.aisafetybook.com. Retrieved May 9, 2024.