Future of Life Institute

Summary

The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI’s work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.

The founders of the Institute include MIT cosmologist Max Tegmark, UCSC cosmologist Anthony Aguirre, and Skype co-founder Jaan Tallinn; among the Institute’s advisors is entrepreneur Elon Musk.

OnAir Post: Future of Life Institute

News

Are we close to an intelligence explosion?
Future of Life Institute, Sarah Hastings-Woodhouse, March 21, 2025

AIs are inching ever-closer to a critical threshold. Beyond this threshold lie great risks—but crossing it is not inevitable.

Intelligence explosion, singularity, fast takeoff… these are a few of the terms given to the surpassing of human intelligence by machine intelligence, likely to be one of the most consequential – and unpredictable – events in our history.

For many decades, scientists have predicted that artificial intelligence will eventually enter a phase of recursive self-improvement, giving rise to systems beyond human comprehension, and a period of extremely rapid technological growth. The product of an intelligence explosion would be not just Artificial General Intelligence (AGI) – a system about as capable as a human across a wide range of domains – but a superintelligence, a system that far surpasses our cognitive abilities.

Speculation is now growing within the tech industry that an intelligence explosion may be just around the corner. Sam Altman, CEO of OpenAI, kicked off the new year with a blog post entitled Reflections, in which he claimed: “We are now confident we know how to build AGI as we have traditionally understood it… We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word”. A researcher at that same company referred to controlling superintelligence as a “short term research agenda”. Another’s antidote to online hype surrounding recent AI breakthroughs was far from an assurance that the singularity is many years or decades away: “We have not yet achieved superintelligence”.

Future of Life Institute Newsletter: Meet PERCEY
Future of Life Institute Media, Maggie Munro, March 5, 2025

Introducing our new AI awareness companion; notes from the AI Action Summit and IASEAI; a new short film on AI replacing human labour; and more!

Today, we’re thrilled to launch ‘PERCEY Made Me’: an innovative AI awareness campaign with an interactive web app at its centre. PERCEY is an AI-based chatbot built to engage users and, in just a few minutes, raise awareness of AI’s current ability to persuade and influence people.

Voiced by the legendary Stephen Fry, PERCEY is your personal guide to navigating the rapidly evolving world of artificial intelligence. With AI threatening to reshape our lives at lightning speed, PERCEY offers a unique, approachable way to:

  • Assess your personal AI risk awareness
  • Challenge and explore your assumptions about AI and AGI
  • Gain insights into AI’s potential impact on your future

Whether you’re a tech enthusiast, cautious observer, or simply curious about the AI landscape, PERCEY provides a refreshing, humour-infused approach to help counter the reckless narratives Big Tech companies are pushing.

Chat with PERCEY now, and please share widely! You can find PERCEY on X, Bluesky, and Instagram at @PERCEYMadeMe.

 

Special: Defeating AI Defenses: Podcast
Future of Life Institute, Nicholas Carlini and Nathan Labenz, March 21, 2025

In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast. Nicholas Carlini works as a security researcher at Google DeepMind, and has published extensively on adversarial machine learning and cybersecurity. Carlini discusses his pioneering work on adversarial attacks against image classifiers, and the challenges of ensuring neural network robustness. He examines the difficulties of defending against such attacks, the role of human intuition in his approach, open-source AI, and the potential for scaling AI security research.
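For readers unfamiliar with the technique, the sketch below illustrates the kind of adversarial attack on image classifiers discussed in the episode: a minimal fast-gradient-sign (FGSM) perturbation in PyTorch. This is an illustrative example under standard assumptions only, not code from the podcast or from Carlini’s published attacks.

```python
# Minimal FGSM sketch: nudge an image in the direction that increases the
# classifier's loss, producing an adversarial example. Illustrative only.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of a (1, C, H, W) image tensor."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # loss w.r.t. the true label
    loss.backward()                               # gradient of the loss w.r.t. pixels
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()         # keep pixel values in valid range
```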

00:00 Nicholas Carlini’s contributions to cybersecurity

08:19 Understanding attack strategies

29:39 High-dimensional spaces and attack intuitions

51:00 Challenges in open-source model safety

01:00:11 Unlearning and fact editing in models

01:10:55 Adversarial examples and human robustness

01:37:03 Cryptography and AI robustness

01:55:51 Scaling AI security research

About

Mission

What is the Future of Life Institute, and where did it come from?

Our mission
Preserving the future of life
How certain technologies are developed and used has far-reaching consequences for all life on earth. This is currently the case for artificial intelligence, biotechnologies and nuclear technology.

If properly managed, these technologies could transform the world in a way that makes life substantially better, both for the people alive today and for all the people who have yet to be born. They could be used to treat and eradicate diseases, strengthen democratic processes, and transform education.

If improperly managed, they could do the opposite. They could produce catastrophic events that bring humanity to its knees, perhaps even pushing us to the brink of extinction.

The Future of Life Institute’s mission is to steer transformative technologies away from extreme, large-scale risks and towards benefiting life.

Learn more

Source: Website

Focus Area: Artificial Intelligence

From recommender algorithms to chatbots to self-driving cars, AI is changing our lives. As the impact of this technology grows, so will the risks.

SPOTLIGHT
We must not build AI to replace humans.

A new essay by Anthony Aguirre, Executive Director of the Future of Life Institute

Humanity is on the brink of developing artificial general intelligence that exceeds our own. It’s time to close the gates on AGI and superintelligence… before we lose control of our future.

Keep the Future Human explains how unchecked development of smarter-than-human, autonomous, general-purpose AI systems will almost inevitably lead to human replacement. But it doesn’t have to.
Learn how we can keep the future human and experience the extraordinary benefits of Tool AI…

View the site

Source: Website

Focus Area: Biotechnology

From the accidental release of engineered pathogens to the backfiring of a gene-editing experiment, the dangers from biotechnology are too great for us to proceed blindly.

Modern biotechnology refers to powerful tools kickstarted in the 20th century which can shape and repurpose the properties of living cells, plants and animals. These tools include DNA sequencing, synthetic biology, recombined or ‘recombinant’ DNA, and genome editing.

Learn more 

Source: Website

Focus Area: Nuclear Weapons

Almost eighty years after their introduction, the risks posed by nuclear weapons are as high as ever – and new research reveals that the impacts are even worse than previously reckoned.

There are an estimated 13,000 nuclear weapons in the world, distributed unevenly among nine states. Some of them are hundreds of times more powerful than those which destroyed Hiroshima and Nagasaki. The use of just a few hundred could leave Earth’s population decimated by a nuclear winter.

Learn more 

Source: Website

Contact

Email: General inquiries, Policy, Futures, Grants, Press

Web Links

Videos

How might AI be weaponized? | AI, Social Media and Nukes at SXSW 2024

May 9, 2024 (58:00)
By: Future of Life Institute

FLI’s Anthony Aguirre speaking on the panel ‘From Algorithms to Arms: Understanding the Interplay of AI, Social Media and Nukes’ at South by Southwest (SXSW) on March 9, 2024.

See here for event details: https://schedule.sxsw.com/2024/events

Featuring: Anthony Aguirre, Executive Director, Future of Life Institute.

Frances Haugen, Beyond the Screen, former Facebook Product Manager.

Jeffrey Ladish, Center for Humane Technology, Head of AI Insights.

Emily Schwartz, Communications Partner, Bryson Gillette.

Policy & Research

We aim to improve the governance of artificial intelligence, and its intersection with biological, nuclear and cyber risk.

Introduction

Source: Website

Improving the governance of transformative technologies

The policy team at FLI works to improve national and international governance of AI. FLI has spearheaded numerous efforts to this end.

In 2017 we created the influential Asilomar AI Principles, a set of governance principles signed by thousands of leading minds in AI research and industry. More recently, our 2023 open letter sparked a global debate on the rightful place of AI in our societies. FLI has given testimony at the U.S. Congress, the European Parliament, and other key jurisdictions.


Project database

Recommendations for the U.S. AI Action Plan
The Future of Life Institute proposal for President Trump’s AI Action Plan. Our recommendations aim to protect the presidency from AI loss-of-control, promote the development of AI systems free from ideological or social agendas, protect American workers from job loss and replacement, and more.

Perspectives of Traditional Religions on Positive AI Futures
Most of the global population participates in a traditional religion. Yet the perspectives of these religions are largely absent from strategic AI discussions. This initiative aims to support religious groups to voice their faith-specific concerns and hopes for a world with AI, and work with them to resist the harms and realise the benefits.

FLI AI Safety Index 2024
Seven AI and governance experts evaluate the safety practices of six leading general-purpose AI companies.

AI Convergence: Risks at the Intersection of AI and Nuclear, Biological and Cyber Threats
The dual-use nature of AI systems can amplify the dual-use nature of other technologies—this is known as AI convergence. We provide policy expertise to policymakers in the United States in three key convergence areas: biological, nuclear, and cyber.

AI Safety Summits
Governments are exploring collaboration on navigating a world with advanced AI. FLI provides them with advice and support.

Engaging with AI Executive Orders
We provide formal input to agencies across the US federal government, including technical and policy expertise on a wide range of issues such as export controls, hardware governance, standard setting, procurement, and more.

Implementing the European AI Act
Our key recommendations include broadening the Act’s scope to regulate general-purpose systems, and extending the definition of prohibited manipulation to cover any type of manipulative technique and manipulation that causes societal harm.

Educating about Autonomous Weapons
Military AI applications are rapidly expanding. We develop educational materials about how certain narrow classes of AI-powered weapons can harm national security and destabilize civilization, notably weapons where kill decisions are fully delegated to algorithms.

Latest policy and research papers

Staffer’s Guide to AI Policy: Congressional Committees and Relevant Legislation
March 2025

Recommendations for the U.S. AI Action Plan
March 2025

Safety Standards Delivering Controllable and Beneficial AI Tools
February 2025

Framework for Responsible Use of AI in the Nuclear Domain
February 2025

Futures Program

The Futures program aims to guide humanity towards the beneficial outcomes made possible by transformative technologies.

Guiding humanity towards beneficial outcomes
The Futures program aims to guide humanity towards the beneficial outcomes made possible by transformative technologies. By employing tools like storytelling, worldbuilding, scenario planning, research, and forecasting, we explore possible futures and determine actionable pathways. We then pinpoint the essential policies, decisions, and institutions needed to navigate these paths, and how best to deliver them. The program seeks to engage a diverse group of stakeholders from different professions, communities, and regions to shape our shared future together.

Futures Projects

Source: Website

Perspectives of Traditional Religions on Positive AI Futures
Most of the global population participates in a traditional religion. Yet the perspectives of these religions are largely absent from strategic AI discussions. This initiative aims to support religious groups to voice their faith-specific concerns and hopes for a world with AI, and work with them to resist the harms and realise the benefits.

The Elders Letter on Existential Threats
The Elders, the Future of Life Institute and a diverse range of preeminent public figures are calling on world leaders to urgently address the ongoing harms and escalating risks of the climate crisis, pandemics, nuclear weapons, and ungoverned AI.

Realising Aspirational Futures – New FLI Grants Opportunities
We are opening two new funding opportunities to support research into the ways that artificial intelligence can be harnessed safely to make the world a better place.

The Windfall Trust
The Windfall Trust aims to alleviate the economic impact of AI-driven joblessness by building a global, universally accessible social safety net.

Imagine A World Podcast
Can you imagine a world in 2045 where we manage to avoid the climate crisis, major wars, and the potential harms of artificial intelligence? Our new podcast series explores ways we could build a more positive future, and offers thought-provoking ideas for how we might get there.

Worldbuilding Competition
The Future of Life Institute accepted entries from teams across the globe, to compete for a prize purse of up to $100,000 by designing visions of a plausible, aspirational future that includes strong artificial intelligence.

 

More Information

Outreach

Source: Website

Informing the discourse around our focus areas.
The outreach team works to improve awareness, provide accurate and accessible information, and deepen understanding of our focus areas. Using evidence-based strategies of risk communication, we seek to emphasise positive steps by which extreme risks from transformative technologies can be reduced, and global prospects enhanced.

To these ends, we create informative content, operate dynamic social media accounts, collaborate with journalists, and run the Future of Life Award, which celebrates unsung heroes whose efforts have made our world a better, safer place.

Grants

Source: Website

Supporting vital cutting-edge work with a wise, future-oriented mindset.

Financial support for promising work aligned with our mission.
Crises like COVID-19 show us that our civilisation is fragile, and needs to plan ahead better. FLI’s grants are for those who take this fragility seriously, who wish to study the risks from ever more powerful technologies and develop strategies for reducing them. The goal is to win the wisdom race: the race between the growing power of our technology and the wisdom with which we manage it.

One such grant program is the Multistakeholder Engagement for Safe and Prosperous AI.

FLI is launching new grants to educate and engage stakeholder groups, as well as the general public, in the movement for safe, secure and beneficial AI.

Status: Closed for submissions
Deadline: 4 February 2025, 23:59 EST

Events

Source: Website

We convene leaders of the relevant fields to discuss ways of ensuring the safe development and use of powerful technologies.

Convening the leading figures in our focus areas to spark discussion
In the past decade, FLI has hosted a wide range of events, conferences and workshops. These events have sparked a myriad of important outcomes, including the Asilomar Principles, FLI’s multi-million-dollar grant programs, and a large number of important connections and conversations.

Over the years, FLI has displayed a unique ability to convene many of the leading figures in fields relating to our focus areas. This convening power is essential for fostering a collaborative and inclusive problem-solving environment.

Wikipedia


The Future of Life Institute (FLI) is a nonprofit organization which aims to steer transformative technology towards benefiting life and away from large-scale risks, with a focus on existential risk from advanced artificial intelligence (AI). FLI’s work includes grantmaking, educational outreach, and advocacy within the United Nations, United States government, and European Union institutions.

The founders of the Institute include MIT cosmologist Max Tegmark, UCSC cosmologist Anthony Aguirre, and Skype co-founder Jaan Tallinn; among the Institute’s advisors is entrepreneur Elon Musk.

Purpose

Max Tegmark, professor at MIT, one of the founders and current president of the Future of Life Institute

FLI’s stated mission is to steer transformative technology towards benefiting life and away from large-scale risks.[2] FLI’s philosophy focuses on the potential risk to humanity from the development of human-level or superintelligent artificial general intelligence (AGI), but also works to mitigate risk from biotechnology, nuclear weapons and global warming.[3]

History

FLI was founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, DeepMind research scientist Viktoriya Krakovna, Tufts University postdoctoral scholar Meia Chita-Tegmark, and UCSC physicist Anthony Aguirre. The Institute’s advisors include computer scientists Stuart J. Russell and Francesca Rossi, biologist George Church, cosmologist Saul Perlmutter, astrophysicist Sandra Faber, theoretical physicist Frank Wilczek, entrepreneur Elon Musk, and actors and science communicators Alan Alda and Morgan Freeman (as well as cosmologist Stephen Hawking prior to his death in 2018).[4][5][6]

Starting in 2017, FLI has offered an annual “Future of Life Award”, with the first awardee being Vasili Arkhipov. The same year, FLI released Slaughterbots, a short arms-control advocacy film. FLI released a sequel in 2021.[7]

In 2018, FLI drafted a letter calling for “laws against lethal autonomous weapons”. Signatories included Elon Musk, Demis Hassabis, Shane Legg, and Mustafa Suleyman.[8]

In January 2023, Swedish magazine Expo reported that the FLI had offered a grant of $100,000 to a foundation set up by Nya Dagbladet, a Swedish far-right online newspaper.[9][10] In response, Tegmark said that the institute had only become aware of Nya Dagbladet‘s positions during due diligence processes a few months after the grant was initially offered, and that the grant had been immediately revoked.[10]

Open letter on an AI pause

In March 2023, FLI published a letter titled “Pause Giant AI Experiments: An Open Letter“. This called on major AI developers to agree on a verifiable six-month pause of any systems “more powerful than GPT-4” and to use that time to institute a framework for ensuring safety; or, failing that, for governments to step in with a moratorium. The letter said: “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one – not even their creators – can understand, predict, or reliably control”.[11] The letter referred to the possibility of “a profound change in the history of life on Earth” as well as potential risks of AI-generated propaganda, loss of jobs, human obsolescence, and society-wide loss of control.[12][13]

Prominent signatories of the letter included Elon Musk, Steve Wozniak, Evan Sharp, Chris Larsen, and Gary Marcus; AI lab CEOs Connor Leahy and Emad Mostaque; politician Andrew Yang; deep-learning researcher Yoshua Bengio; and Yuval Noah Harari.[14] Marcus stated “the letter isn’t perfect, but the spirit is right.” Mostaque stated, “I don’t think a six month pause is the best idea or agree with everything but there are some interesting things in that letter.” In contrast, Bengio explicitly endorsed the six-month pause in a press conference.[15][16] Musk predicted that “Leading AGI developers will not heed this warning, but at least it was said.”[17] Some signatories, including Musk, said they were motivated by fears of existential risk from artificial general intelligence.[18] Some of the other signatories, such as Marcus, instead said they signed out of concern about risks such as AI-generated propaganda.[19]

The authors of one of the papers cited in FLI’s letter, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”,[20] including Emily M. Bender, Timnit Gebru, and Margaret Mitchell, criticised the letter.[21] Mitchell said that “by treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of FLI. Ignoring active harms right now is a privilege that some of us don’t have.”[21]

Operations

Advocacy

FLI has actively contributed to policymaking on AI. In October 2023, for example, U.S. Senate majority leader Chuck Schumer invited FLI to share its perspective on AI regulation with selected senators.[22] In Europe, FLI successfully advocated for the inclusion of more general AI systems, such as GPT-4, in the EU’s Artificial Intelligence Act.[23]

Future of Life Institute placard on autonomous weapons at the United Nations headquarters in Geneva, 2021.

In military policy, FLI coordinated the support of the scientific community for the Treaty on the Prohibition of Nuclear Weapons.[24] At the UN and elsewhere, the Institute has also advocated for a treaty on autonomous weapons.[25][26]

Research grants

The FLI research program started in 2015 with an initial donation of $10 million from Elon Musk.[27][28][29] In this initial round, a total of $7 million was awarded to 37 research projects.[30] In July 2021, FLI announced that it would launch a new $25 million grant program with funding from the Russian–Canadian programmer Vitalik Buterin.[31]

Conferences

In 2014, the Future of Life Institute held its opening event at MIT: a panel discussion on “The Future of Technology: Benefits and Risks”, moderated by Alan Alda.[32][33] The panelists were synthetic biologist George Church, geneticist Ting Wu, economist Andrew McAfee, physicist and Nobel laureate Frank Wilczek and Skype co-founder Jaan Tallinn.[34][35]

Since 2015, FLI has organised biennial conferences with the stated purpose of bringing together AI researchers from academia and industry. As of April 2023, the following conferences have taken place:

  • “The Future of AI: Opportunities and Challenges” conference in Puerto Rico (2015). The stated goal was to identify promising research directions that could help maximize the future benefits of AI.[36] At the conference, FLI circulated an open letter on AI safety which was subsequently signed by Stephen Hawking, Elon Musk, and many artificial intelligence researchers.[37]
  • The Beneficial AI conference in Asilomar, California (2017),[38] a private gathering of what The New York Times called “heavy hitters of A.I.” (including Yann LeCun, Elon Musk, and Nick Bostrom).[39] The institute released a set of principles for responsible AI development that came out of the discussion at the conference, signed by Yoshua Bengio, Yann LeCun, and many other AI researchers.[40] These principles may have influenced the regulation of artificial intelligence and subsequent initiatives, such as the OECD Principles on Artificial Intelligence.[41]
  • The beneficial AGI conference in Puerto Rico (2019).[42] The stated focus of the meeting was answering long-term questions with the goal of ensuring that artificial general intelligence is beneficial to humanity.[43]


References

  1. ^ “Future of Life Institute received $665 million”. Philanthropy News Digest. Retrieved 13 December 2024.
  2. ^ “Future of Life Institute homepage”. Future of Life Institute. 9 September 2021. Archived from the original on 8 September 2021. Retrieved 9 September 2021.
  3. ^ Chen, Angela (11 September 2014). “Is Artificial Intelligence a Threat?”. Chronicle of Higher Education. Archived from the original on 22 December 2016. Retrieved 18 Sep 2014.
  4. ^ “But What Would the End of Humanity Mean for Me?”. The Atlantic. 9 May 2014. Archived from the original on 4 June 2014. Retrieved 13 April 2020.
  5. ^ “Who we are”. Future of Life Institute. Archived from the original on 6 April 2020. Retrieved 13 April 2020.
  6. ^ “Our science-fiction apocalypse: Meet the scientists trying to predict the end of the world”. Salon. 5 October 2014. Archived from the original on 18 March 2021. Retrieved 13 April 2020.
  7. ^ Walsh, Bryan (20 October 2022). “The physicist Max Tegmark works to ensure that life has a future”. Vox. Archived from the original on 31 March 2023. Retrieved 31 March 2023.
  8. ^ “AI Innovators Take Pledge Against Autonomous Killer Weapons”. NPR. 2018. Archived from the original on 31 March 2023. Retrieved 31 March 2023.
  9. ^ Dalsbro, Anders; Leman, Jonathan (2023-01-13). “Elon Musk-funded nonprofit run by MIT professor offered to finance Swedish pro-nazi group”. Expo. Archived from the original on 2023-06-25. Retrieved 2023-08-17.
  10. ^ a b Hume, Tim (2023-01-19). “Elon Musk-Backed Non-Profit Offered $100K Grant to ‘Pro-Nazi’ Media Outlet”. Vice. Archived from the original on 2023-06-23. Retrieved 2023-08-17.
  11. ^ “Elon Musk among experts urging a halt to AI training”. BBC News. 2023-03-29. Archived from the original on 2023-04-01. Retrieved 2023-04-01.
  12. ^ “Elon Musk and other tech leaders call for pause in ‘out of control’ AI race”. CNN. 29 March 2023. Archived from the original on 10 April 2023. Retrieved 30 March 2023.
  13. ^ “Pause Giant AI Experiments: An Open Letter”. Future of Life Institute. Archived from the original on 27 March 2023. Retrieved 30 March 2023.
  14. ^ Ball, James (2023-04-02). “We’re in an AI race, banning it would be foolish”. The Sunday Times. Archived from the original on 2023-08-19. Retrieved 2023-04-02.
  15. ^ “Musk and Wozniak among 1,100+ signing open letter calling for 6-month ban on creating powerful A.I.” Fortune. March 2023. Archived from the original on 29 March 2023. Retrieved 30 March 2023.
  16. ^ “The Open Letter to Stop ‘Dangerous’ AI Race Is a Huge Mess”. www.vice.com. March 2023. Archived from the original on 30 March 2023. Retrieved 30 March 2023.
  17. ^ “Elon Musk”. Twitter. Archived from the original on 30 March 2023. Retrieved 30 March 2023.
  18. ^ Rosenberg, Scott (30 March 2023). “Open letter sparks debate over “pausing” AI research over risks”. Axios. Archived from the original on 31 March 2023. Retrieved 31 March 2023.
  19. ^ “Tech leaders urge a pause in the ‘out-of-control’ artificial intelligence race”. NPR. 2023. Archived from the original on 29 March 2023. Retrieved 30 March 2023.
  20. ^ Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Shmitchell, Shmargaret (2021-03-03). “On the Dangers of Stochastic Parrots: Can Language Models be Too Big?”. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. Virtual Event Canada: ACM. pp. 610–623. doi:10.1145/3442188.3445922. ISBN 978-1-4503-8309-7.
  21. ^ a b Kari, Paul (2023-04-01). “Letter signed by Elon Musk demanding AI research pause sparks controversy”. The Guardian. Archived from the original on 2023-04-01. Retrieved 2023-04-01.
  22. ^ Krishan, Nihal (2023-10-26). “Sen. Chuck Schumer’s second AI Insight Forum covers increased R&D funding, immigration challenges and safeguards”. FedScoop. Retrieved 2024-03-16.
  23. ^ “EU artificial intelligence act not ‘futureproof’, experts warn MEPs”. Science|Business. Retrieved 2024-03-16.
  24. ^ Scientists Support a Nuclear Ban, 16 June 2017, retrieved 2024-03-16
  25. ^ “Educating about Lethal Autonomous Weapons”. Future of Life Institute. Retrieved 2024-03-16.
  26. ^ Government of Costa Rica (February 24, 2023). “FLI address” (PDF). Latin American and the Caribbean conference on the social and humanitarian impact of autonomous weapons.
  27. ^ “Elon Musk donates $10M to keep AI beneficial”. Future of Life Institute. 15 January 2015. Archived from the original on 28 February 2018. Retrieved 28 July 2019.
  28. ^ “Elon Musk donates $10M to Artificial Intelligence research”. SlashGear. 15 January 2015. Archived from the original on 7 April 2015. Retrieved 26 April 2015.
  29. ^ “Elon Musk is Donating $10M of his own Money to Artificial Intelligence Research”. Fast Company. 15 January 2015. Archived from the original on 30 October 2015. Retrieved 19 January 2015.
  30. ^ “New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial”. Future of Life Institute. 28 October 2015. Archived from the original on 28 July 2019. Retrieved 28 July 2019.
  31. ^ “FLI announces $25M grants program for existential risk reduction”. Future of Life Institute. 2 July 2021. Archived from the original on 9 September 2021. Retrieved 9 September 2021.
  32. ^ “The Future of Technology: Benefits and Risks”. Future of Life Institute. 24 May 2014. Archived from the original on 28 July 2019. Retrieved 28 July 2019.
  33. ^ “Machine Intelligence Research Institute – June 2014 Newsletter”. 2 June 2014. Archived from the original on 3 July 2014. Retrieved 19 June 2014.
  34. ^ “FHI News: ‘Future of Life Institute hosts opening event at MIT’”. Future of Humanity Institute. 20 May 2014. Archived from the original on 27 July 2014. Retrieved 19 June 2014.
  35. ^ “The Future of Technology: Benefits and Risks”. Personal Genetics Education Project. 9 May 2014. Archived from the original on 22 December 2015. Retrieved 19 June 2014.
  36. ^ “AI safety conference in Puerto Rico”. Future of Life Institute. Archived from the original on 7 November 2015. Retrieved 19 January 2015.
  37. ^ “Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter”. Future of Life Institute. Archived from the original on 2019-08-10. Retrieved 2019-07-28.
  38. ^ “Beneficial AI 2017”. Future of Life Institute. Archived from the original on 2020-02-24. Retrieved 2019-07-28.
  39. ^ Metz, Cade (June 9, 2018). “Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots”. NYT. Archived from the original on February 15, 2021. Retrieved June 10, 2018. The private gathering at the Asilomar Hotel was organized by the Future of Life Institute, a think tank built to discuss the existential risks of A.I. and other technologies.
  40. ^ “Asilomar AI Principles”. Future of Life Institute. Archived from the original on 2017-12-11. Retrieved 2019-07-28.
  41. ^ “Asilomar Principles” (PDF). OECD. Archived (PDF) from the original on 2021-09-09. Retrieved 2021-09-09.
  42. ^ “Beneficial AGI 2019”. Future of Life Institute. Archived from the original on 2019-07-28. Retrieved 2019-07-28.
  43. ^ “CSER at the Beneficial AGI 2019 Conference”. Center for the Study of Existential Risk. Archived from the original on 2019-07-28. Retrieved 2019-07-28.


    Discuss

    OnAir membership is required. The lead Moderator for the discussions is Scott Joy. We encourage civil, honest, and safe discourse. For more information on commenting and giving feedback, see our Comment Guidelines.

    This is an open discussion on the contents of this post.
