Future of Life Institute
| Abbreviation | FLI |
| --- | --- |
| Formation | March 2014 |
| Founders | Max Tegmark, Jaan Tallinn, Viktoriya Krakovna, Meia Chita-Tegmark, Anthony Aguirre |
| Type | Non-profit research institute |
| Purpose | Reduction of existential risk, particularly from advanced artificial intelligence |
| Location | Cambridge, Massachusetts, U.S. (42°22′25″N 71°06′35″W) |
| President | Max Tegmark |
| Endowment | $665.8 million (2021)[1] |
| Website | futureoflife.org |
The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI's work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.
The founders of the Institute include MIT cosmologist Max Tegmark, UCSC cosmologist Anthony Aguirre, and Skype co-founder Jaan Tallinn; among the Institute's advisors is entrepreneur Elon Musk.
Purpose
FLI's stated mission is to steer transformative technology towards benefiting life and away from large-scale risks.[2] FLI's philosophy focuses on the potential risk to humanity from the development of human-level or superintelligent artificial general intelligence (AGI), but the Institute also works to mitigate risks from biotechnology, nuclear weapons, and global warming.[3]
History
FLI was founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, DeepMind research scientist Viktoriya Krakovna, Tufts University postdoctoral scholar Meia Chita-Tegmark, and UCSC physicist Anthony Aguirre. The Institute's advisors include computer scientists Stuart J. Russell and Francesca Rossi, biologist George Church, cosmologist Saul Perlmutter, astrophysicist Sandra Faber, theoretical physicist Frank Wilczek, entrepreneur Elon Musk, and actors and science communicators Alan Alda and Morgan Freeman (as well as cosmologist Stephen Hawking prior to his death in 2018).[4][5][6]
Since 2017, FLI has presented an annual "Future of Life Award"; the first awardee was Vasili Arkhipov. Also in 2017, FLI released Slaughterbots, a short arms-control advocacy film; a sequel followed in 2021.[7]
In 2018, FLI drafted a letter calling for "laws against lethal autonomous weapons". Signatories included Elon Musk, Demis Hassabis, Shane Legg, and Mustafa Suleyman.[8]
In January 2023, Swedish magazine Expo reported that the FLI had offered a grant of $100,000 to a foundation set up by Nya Dagbladet, a Swedish far-right online newspaper.[9][10] In response, Tegmark said that the institute had only become aware of Nya Dagbladet's positions during due diligence processes a few months after the grant was initially offered, and that the grant had been immediately revoked.[10]
Open letter on an AI pause
In March 2023, FLI published a letter titled "Pause Giant AI Experiments: An Open Letter". It called on major AI developers to agree on a verifiable six-month pause of any systems "more powerful than GPT-4" and to use that time to institute a framework for ensuring safety; failing that, it asked governments to step in with a moratorium. The letter said: "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one - not even their creators - can understand, predict, or reliably control".[11] It referred to the possibility of "a profound change in the history of life on Earth", as well as potential risks of AI-generated propaganda, loss of jobs, human obsolescence, and society-wide loss of control.[12][13]
Prominent signatories of the letter included Elon Musk, Steve Wozniak, Evan Sharp, Chris Larsen, and Gary Marcus; AI lab CEOs Connor Leahy and Emad Mostaque; politician Andrew Yang; deep-learning researcher Yoshua Bengio; and Yuval Noah Harari.[14] Marcus stated "the letter isn't perfect, but the spirit is right." Mostaque stated, "I don't think a six month pause is the best idea or agree with everything but there are some interesting things in that letter." In contrast, Bengio explicitly endorsed the six-month pause in a press conference.[15][16] Musk predicted that "Leading AGI developers will not heed this warning, but at least it was said."[17] Some signatories, including Musk, said they were motivated by fears of existential risk from artificial general intelligence.[18] Some of the other signatories, such as Marcus, instead said they signed out of concern about risks such as AI-generated propaganda.[19]
The authors of one of the papers cited in FLI's letter, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?",[20] including Emily M. Bender, Timnit Gebru, and Margaret Mitchell, criticised the letter.[21] Mitchell said that "by treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI. Ignoring active harms right now is a privilege that some of us don't have."[21]
Operations
Advocacy
FLI has actively contributed to policymaking on AI. In October 2023, for example, U.S. Senate majority leader Chuck Schumer invited FLI to share its perspective on AI regulation with selected senators.[22] In Europe, FLI successfully advocated for the inclusion of more general AI systems, such as GPT-4, in the EU's Artificial Intelligence Act.[23]
In military policy, FLI coordinated the support of the scientific community for the Treaty on the Prohibition of Nuclear Weapons.[24] At the UN and elsewhere, the Institute has also advocated for a treaty on autonomous weapons.[25][26]
Research grants
The FLI research program started in 2015 with an initial donation of $10 million from Elon Musk.[27][28][29] In this initial round, a total of $7 million was awarded to 37 research projects.[30] In July 2021, FLI announced that it would launch a new $25 million grant program with funding from the Russian–Canadian programmer Vitalik Buterin.[31]
Conferences
In 2014, the Future of Life Institute held its opening event at MIT: a panel discussion on "The Future of Technology: Benefits and Risks", moderated by Alan Alda.[32][33] The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek, and Skype co-founder Jaan Tallinn.[34][35]
Since 2015, FLI has organised biennial conferences with the stated purpose of bringing together AI researchers from academia and industry. As of April 2023, the following conferences have taken place:
- "The Future of AI: Opportunities and Challenges" conference in Puerto Rico (2015). The stated goal was to identify promising research directions that could help maximize the future benefits of AI.[36] At the conference, FLI circulated an open letter on AI safety which was subsequently signed by Stephen Hawking, Elon Musk, and many artificial intelligence researchers.[37]
- The Beneficial AI conference in Asilomar, California (2017),[38] a private gathering of what The New York Times called "heavy hitters of A.I." (including Yann LeCun, Elon Musk, and Nick Bostrom).[39] The institute released a set of principles for responsible AI development that came out of the discussion at the conference, signed by Yoshua Bengio, Yann LeCun, and many other AI researchers.[40] These principles may have influenced the regulation of artificial intelligence and subsequent initiatives, such as the OECD Principles on Artificial Intelligence.[41]
- The Beneficial AGI conference in Puerto Rico (2019).[42] The stated focus of the meeting was answering long-term questions with the goal of ensuring that artificial general intelligence is beneficial to humanity.[43]
In the media
[edit]- "The Fight to Define When AI is 'High-Risk'" in Wired.
- "Lethal Autonomous Weapons exist; They Must Be Banned" in IEEE Spectrum.
- "United States and Allies Protest U.N. Talks to Ban Nuclear Weapons" in The New York Times.
- "Is Artificial Intelligence a Threat?" in The Chronicle of Higher Education, including interviews with FLI founders Max Tegmark, Jaan Tallinn and Viktoriya Krakovna.
- "But What Would the End of Humanity Mean for Me?", an interview with Max Tegmark on the ideas behind FLI in The Atlantic.
See also
- Future of Humanity Institute
- Centre for the Study of Existential Risk
- Global catastrophic risk
- Leverhulme Centre for the Future of Intelligence
- Machine Intelligence Research Institute
- The Precipice: Existential Risk and the Future of Humanity
References
[edit]- ^ "Future of Life Institute received $665 million". Philanthropy News Digest. Retrieved 13 December 2024.
- ^ "Future of Life Institute homepage". Future of Life Institute. 9 September 2021. Archived from the original on 8 September 2021. Retrieved 9 September 2021.
- ^ Chen, Angela (11 September 2014). "Is Artificial Intelligence a Threat?". The Chronicle of Higher Education. Archived from the original on 22 December 2016. Retrieved 18 September 2014.
- ^ "But What Would the End of Humanity Mean for Me?". The Atlantic. 9 May 2014. Archived from the original on 4 June 2014. Retrieved 13 April 2020.
- ^ "Who we are". Future of Life Institute. Archived from the original on 6 April 2020. Retrieved 13 April 2020.
- ^ "Our science-fiction apocalypse: Meet the scientists trying to predict the end of the world". Salon. 5 October 2014. Archived from the original on 18 March 2021. Retrieved 13 April 2020.
- ^ Walsh, Bryan (20 October 2022). "The physicist Max Tegmark works to ensure that life has a future". Vox. Archived from the original on 31 March 2023. Retrieved 31 March 2023.
- ^ "AI Innovators Take Pledge Against Autonomous Killer Weapons". NPR. 2018. Archived from the original on 31 March 2023. Retrieved 31 March 2023.
- ^ Dalsbro, Anders; Leman, Jonathan (13 January 2023). "Elon Musk-funded nonprofit run by MIT professor offered to finance Swedish pro-nazi group". Expo. Archived from the original on 25 June 2023. Retrieved 17 August 2023.
- ^ a b Hume, Tim (19 January 2023). "Elon Musk-Backed Non-Profit Offered $100K Grant to 'Pro-Nazi' Media Outlet". Vice. Archived from the original on 23 June 2023. Retrieved 17 August 2023.
- ^ "Elon Musk among experts urging a halt to AI training". BBC News. 29 March 2023. Archived from the original on 1 April 2023. Retrieved 1 April 2023.
- ^ "Elon Musk and other tech leaders call for pause in 'out of control' AI race". CNN. 29 March 2023. Archived from the original on 10 April 2023. Retrieved 30 March 2023.
- ^ "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Archived from the original on 27 March 2023. Retrieved 30 March 2023.
- ^ Ball, James (2 April 2023). "We're in an AI race, banning it would be foolish". The Sunday Times. Archived from the original on 19 August 2023. Retrieved 2 April 2023.
- ^ "Musk and Wozniak among 1,100+ signing open letter calling for 6-month ban on creating powerful A.I." Fortune. March 2023. Archived from the original on 29 March 2023. Retrieved 30 March 2023.
- ^ "The Open Letter to Stop 'Dangerous' AI Race Is a Huge Mess". www.vice.com. March 2023. Archived from the original on 30 March 2023. Retrieved 30 March 2023.
- ^ "Elon Musk". Twitter. Archived from the original on 30 March 2023. Retrieved 30 March 2023.
- ^ Rosenberg, Scott (30 March 2023). "Open letter sparks debate over "pausing" AI research over risks". Axios. Archived from the original on 31 March 2023. Retrieved 31 March 2023.
- ^ "Tech leaders urge a pause in the 'out-of-control' artificial intelligence race". NPR. 2023. Archived from the original on 29 March 2023. Retrieved 30 March 2023.
- ^ Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Shmitchell, Shmargaret (3 March 2021). "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?". Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. Virtual Event, Canada: ACM. pp. 610–623. doi:10.1145/3442188.3445922. ISBN 978-1-4503-8309-7.
- ^ a b Paul, Kari (1 April 2023). "Letter signed by Elon Musk demanding AI research pause sparks controversy". The Guardian. Archived from the original on 1 April 2023. Retrieved 1 April 2023.
- ^ Krishan, Nihal (26 October 2023). "Sen. Chuck Schumer's second AI Insight Forum covers increased R&D funding, immigration challenges and safeguards". FedScoop. Retrieved 16 March 2024.
- ^ "EU artificial intelligence act not 'futureproof', experts warn MEPs". Science|Business. Retrieved 16 March 2024.
- ^ "Scientists Support a Nuclear Ban". 16 June 2017. Retrieved 16 March 2024.
- ^ "Educating about Lethal Autonomous Weapons". Future of Life Institute. Retrieved 16 March 2024.
- ^ Government of Costa Rica (24 February 2023). "FLI address" (PDF). Latin American and Caribbean Conference on the Social and Humanitarian Impact of Autonomous Weapons.
- ^ "Elon Musk donates $10M to keep AI beneficial". Future of Life Institute. 15 January 2015. Archived from the original on 28 February 2018. Retrieved 28 July 2019.
- ^ "Elon Musk donates $10M to Artificial Intelligence research". SlashGear. 15 January 2015. Archived from the original on 7 April 2015. Retrieved 26 April 2015.
- ^ "Elon Musk is Donating $10M of his own Money to Artificial Intelligence Research". Fast Company. 15 January 2015. Archived from the original on 30 October 2015. Retrieved 19 January 2015.
- ^ "New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial". Future of Life Institute. 28 October 2015. Archived from the original on 28 July 2019. Retrieved 28 July 2019.
- ^ "FLI announces $25M grants program for existential risk reduction". Future of Life Institute. 2 July 2021. Archived from the original on 9 September 2021. Retrieved 9 September 2021.
- ^ "The Future of Technology: Benefits and Risks". Future of Life Institute. 24 May 2014. Archived from the original on 28 July 2019. Retrieved 28 July 2019.
- ^ "FHI News: 'Future of Life Institute hosts opening event at MIT'". Future of Humanity Institute. 20 May 2014. Archived from the original on 27 July 2014. Retrieved 19 June 2014.
- ^ "The Future of Technology: Benefits and Risks". Personal Genetics Education Project. 9 May 2014. Archived from the original on 22 December 2015. Retrieved 19 June 2014.
- ^ "AI safety conference in Puerto Rico". Future of Life Institute. Archived from the original on 7 November 2015. Retrieved 19 January 2015.
- ^ "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter". Future of Life Institute. Archived from the original on 2019-08-10. Retrieved 2019-07-28.
- ^ "Beneficial AI 2017". Future of Life Institute. Archived from the original on 2020-02-24. Retrieved 2019-07-28.
- ^ Metz, Cade (June 9, 2018). "Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots". NYT. Archived from the original on February 15, 2021. Retrieved June 10, 2018.
The private gathering at the Asilomar Hotel was organized by the Future of Life Institute, a think tank built to discuss the existential risks of A.I. and other technologies.
- ^ "Asilomar AI Principles". Future of Life Institute. Archived from the original on 2017-12-11. Retrieved 2019-07-28.
- ^ "Asilomar Principles" (PDF). OECD. Archived (PDF) from the original on 2021-09-09. Retrieved 2021-09-09.
- ^ "Beneficial AGI 2019". Future of Life Institute. Archived from the original on 2019-07-28. Retrieved 2019-07-28.
- ^ "CSER at the Beneficial AGI 2019 Conference". Center for the Study of Existential Risk. Archived from the original on 2019-07-28. Retrieved 2019-07-28.
Categories
- Futures studies organizations
- 2014 establishments in Massachusetts
- Research institutes established in 2014
- Artificial intelligence associations
- Transhumanist organizations
- Existential risk organizations
- Existential risk from artificial general intelligence
- Organizations associated with effective altruism
- Regulation of artificial intelligence