Future of Life Institute


Coordinates: 42°22′25″N 71°06′35″W

Future of Life Institute
Abbreviation: FLI
Founders: Max Tegmark, Jaan Tallinn, Viktoriya Krakovna, Meia Chita-Tegmark, Anthony Aguirre
Type: Independent non-profit
Tax ID no.: 47-1052538
Legal status: Active
Purpose: Reducing extreme, large-scale risks from transformative technologies, as well as steering the development and use of these technologies to benefit life.
Location: Cambridge, Massachusetts, U.S.
President: Max Tegmark
Website: futureoflife.org

The Future of Life Institute (FLI) is an independent nonprofit that works to reduce extreme, large-scale risks from transformative technologies and to steer the development and use of these technologies to benefit life. The Institute has primarily focused on risks from artificial general intelligence. Its work mostly consists of grantmaking, educational outreach, and policy advocacy within the U.S. government, European Union institutions, and the United Nations. The organisation also runs conferences and contests.[1]

Its founders include MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn, and its board of advisors includes entrepreneur Elon Musk.

Mission

Max Tegmark, professor at MIT, one of the founders and current president of the Future of Life Institute

FLI's mission is to reduce extreme, large-scale risks from transformative technologies and to steer the development and use of these technologies to benefit life. FLI has largely focused on the potential risks to humanity from the development of human-level or superintelligent artificial general intelligence (AGI), but it also works on risks from biotechnology, nuclear weapons, and climate change.[2]

Key people

The Institute was founded in March 2014 by MIT physicist Max Tegmark, Skype co-founder Jaan Tallinn, DeepMind research scientist Viktoriya Krakovna, Tufts University postdoctoral scholar Meia Chita-Tegmark, and UCSC physicist Anthony Aguirre. The Institute's 14-person Scientific Advisory Board includes computer scientists Stuart J. Russell and Francesca Rossi, biologist George Church, cosmologist Saul Perlmutter, astrophysicist Sandra Faber, theoretical physicist Frank Wilczek, entrepreneur Elon Musk, and actors and science communicators Alan Alda and Morgan Freeman (as well as cosmologist Stephen Hawking prior to his death in 2018).[3][4][5]

Grantmaking

Following an initial donation of $10 million from Elon Musk, the Future of Life Institute launched its first round of grantmaking in 2015, focused on funding AI safety research.[6][7][8] A total of $6.5 million was awarded to 37 researchers.[9] In 2018, FLI launched its second round of grantmaking, this time focused on AGI safety research; $2 million was awarded to 10 researchers.[10] In July 2021, FLI announced a multi-year $25 million grant program, funded by the Russian-Canadian programmer Vitalik Buterin and focused on reducing existential risk.[11] FLI has so far invited applications for PhD and postdoctoral fellowships in AI existential safety and plans to launch similar fellowships in policy/governance and the behavioural sciences.[12]

Events and conferences

In 2014, the Future of Life Institute held its opening event at MIT: a panel discussion on "The Future of Technology: Benefits and Risks", moderated by Alan Alda.[13][14] The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek, and Skype co-founder Jaan Tallinn.[15][16]

Since 2015, FLI has organised regular conferences that bring together leading AI builders from academia and industry. To date, the following conferences have taken place:

  • "The Future of AI: Opportunities and Challenges", an AI safety conference held in Puerto Rico in January 2015, which produced an open letter on research priorities for robust and beneficial artificial intelligence.[17][18]
  • Beneficial AI 2017, held in Asilomar, California, which produced the Asilomar AI Principles.[19][20][21][22]
  • Beneficial AGI 2019, held in Puerto Rico.[23][24]

Press coverage

  • "The EU needs to protect (more) against AI Manipulation" in Euractiv.[25]
  • "“Gelet op de gevaren verdient de digitale agenda in Den Haag meer ambitie” in Trouw.[26]
  • "Fliegende Roboterwaffen töten schon jetzt – und niemand kontrolliert sie" in Der Spiegel Ausland.[27]
  • "Slaughterbots are a go" in Politico Europe.[28]
  • "US official rejects plea to ban ‘killer robots’” in The Hill.[29]
  • "The Rise of Killer Robots. Can they be trusted?” in The Times.[30]
  • "The Fight to Define When AI is 'High-Risk'" in Wired. [31]
  • "Existential AI Risks" in Politico Europe.[32]
  • "“The Third Revolution in Warfare” in The Atlantic.[33]
  • "Lethal Autonomous Weapons exist; They Must Be Banned" in IEEE Spectrum.[34]
  • "United States and Allies Protest U.N. Talks to Ban Nuclear Weapons" in The New York Times.[35]
  • "Is Artificial Intelligence a Threat?" in The Chronicle of Higher Education, including interviews with FLI founders Max Tegmark, Jaan Tallinn and Viktoriya Krakovna.[2]
  • "But What Would the End of Humanity Mean for Me?", an interview with Max Tegmark on the ideas behind FLI in The Atlantic.[3]
  • Michael del Castillo (15 January 2015). "Startup branding doesn't hide apocalyptic undertones of letter signed by Elon Musk". Upstart Business Journal.

References

  1. ^ "About the Future of Life Institute". LinkedIn. Retrieved 1 March 2022.
  2. ^ a b Chen, Angela (11 September 2014). "Is Artificial Intelligence a Threat?". The Chronicle of Higher Education. Retrieved 18 September 2014.
  3. ^ a b "But What Would the End of Humanity Mean for Me?". The Atlantic. 9 May 2014. Retrieved 13 April 2020.
  4. ^ "Who we are". Future of Life Institute. Retrieved 13 April 2020.
  5. ^ "Our science-fiction apocalypse: Meet the scientists trying to predict the end of the world". Salon. 5 October 2014. Retrieved 13 April 2020.
  6. ^ "Elon Musk donates $10M to keep AI beneficial". Future of Life Institute. 15 January 2015.
  7. ^ "Elon Musk donates $10M to Artificial Intelligence research". SlashGear. 15 January 2015.
  8. ^ "Elon Musk is Donating $10M of his own Money to Artificial Intelligence Research". Fast Company. 15 January 2015.
  9. ^ "New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial". Future of Life Institute.
  10. ^ "AI Safety Research". Future of Life Institute. Retrieved 2022-03-01.
  11. ^ "FLI announces $25M grants program for existential risk reduction". Future of Life Institute. 2 July 2021.
  12. ^ "Grant Programs". Future of Life Institute. Retrieved 2022-03-01.
  13. ^ "The Future of Technology: Benefits and Risks". Future of Life Institute. 24 May 2014.
  14. ^ "Machine Intelligence Research Institute - June 2014 Newsletter". 2 June 2014. Retrieved 19 June 2014.
  15. ^ "FHI News: 'Future of Life Institute hosts opening event at MIT'". Future of Humanity Institute. 20 May 2014. Retrieved 19 June 2014.
  16. ^ "The Future of Technology: Benefits and Risks". Personal Genetics Education Project. 9 May 2014. Retrieved 19 June 2014.
  17. ^ "AI safety conference in Puerto Rico". Future of Life Institute. Retrieved 19 January 2015.
  18. ^ "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter". Future of Life Institute.
  19. ^ "Beneficial AI 2017". Future of Life Institute.
  20. ^ Metz, Cade (9 June 2018). "Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots". The New York Times. Retrieved 10 June 2018. The private gathering at the Asilomar Hotel was organized by the Future of Life Institute, a think tank built to discuss the existential risks of A.I. and other technologies.
  21. ^ "Asilomar AI Principles". Future of Life Institute.
  22. ^ "Asilomar Principles" (PDF). OECD.
  23. ^ "Beneficial AGI 2019". Future of Life Institute.
  24. ^ "CSER at the Beneficial AGI 2019 Conference". Center for the Study of Existential Risk.
  25. ^ Uuk, Risto (2 February 2022). "The EU needs to protect (more) against AI manipulation". Euractiv. Retrieved 1 March 2022.
  26. ^ Barten, Otto; Brakel, Mark (17 January 2022). "Gelet op de gevaren verdient de digitale agenda in Den Haag meer ambitie". Trouw (in Dutch). Retrieved 1 March 2022.
  27. ^ Kalisch, Muriel (4 January 2022). "Autonome Kriegsmaschinen: Fliegende Roboterwaffen töten schon jetzt – und niemand kontrolliert sie". Der Spiegel (in German). ISSN 2195-1349. Retrieved 1 March 2022.
  28. ^ "AI policy in 2022 — Slaughterbots are go — COVID-19 has exposed AI's flaws in hospital". Politico. 5 January 2022. Retrieved 1 March 2022.
  29. ^ Barnes, Adam (3 December 2021). "US official rejects plea to ban 'killer robots'". The Hill. Retrieved 1 March 2022.
  30. ^ Campbell, Matthew. "The rise of killer robots — can they be trusted?". The Times. ISSN 0140-0460. Retrieved 1 March 2022.
  31. ^ Johnson, Khari (1 September 2021). "The Fight to Define When AI is 'High-Risk'". Wired.
  32. ^ "POLITICO AI: Decoded: Existential AI risks — Transatlantic standards — Would you lie to me GPT-3?". Politico. 22 September 2021. Retrieved 1 March 2022.
  33. ^ Lee, Kai-Fu (11 September 2021). "The Third Revolution in Warfare". The Atlantic. Retrieved 1 March 2022.
  34. ^ Russell, Stuart; Aguirre, Anthony (16 June 2021). "Lethal Autonomous Weapons Exist; They Must Be Banned". IEEE Spectrum.
  35. ^ Sengupta, Somini; Gladstone, Rick (27 March 2017). "United States and Allies Protest U.N. Talks to Ban Nuclear Weapons". The New York Times.
