Future of Life Institute Media

Summary

In addition to its outreach efforts, the Future of Life Institute (FLI) runs a number of media ventures, including FLI Newsletters, FLI Podcasts, and FLI Papers.

The Future of Life Institute is a US-based non-profit focused on steering transformative technology towards benefitting life and away from extreme large-scale risks.



Newsletters

Monthly Newsletter

Source: Website

A monthly update on transformative technology and extreme risks
Our newsletter brings subscribers the latest news on how emerging technologies are transforming our world – for better and worse.

What you can expect

At the end of every month, our 40,000+ subscribers can expect:

  • Our work – An update on FLI activities and output across our cause areas.
  • Policy developments – Significant governance and policy developments in the US, EU, and international institutions.
  • Latest research – The latest scientific and policy research related to our cause areas.
  • Top resources – A list of articles, reports, and podcasts our researchers are reading.
  • Data insights – Important data and visualisations about our cause areas.

The Autonomous Weapons Newsletter

Source: Website

Stay up-to-date on the autonomous weapons space with a monthly newsletter covering policymaking efforts, weapons systems technology, and more.

The EU AI Act Newsletter

Source: Website

Up-to-date developments and analyses of the proposed EU AI law – by Risto Uuk.

Not Another Big Tech Stack

Source: Website

A monthly perspective on AI policy, unaffiliated with any of the major AI companies. By Mark Brakel.

Podcasts

Conversations with far-sighted thinkers.

Source: Website

Our namesake podcast series features the FLI team in conversation with prominent researchers, policy experts, philosophers, and a range of other influential thinkers.

21 March, 2025
Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)

In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast. Nicholas Carlini works as a security researcher at Google DeepMind, and has published extensively on adversarial machine learning and cybersecurity. Carlini discusses his pioneering work on adversarial attacks against image classifiers, and the challenges of ensuring neural network robustness. He examines the difficulties of defending against such attacks, the role of human intuition in his approach, open-source AI, and the potential for scaling AI security research.

00:00 Nicholas Carlini’s contributions to cybersecurity

08:19 Understanding attack strategies

29:39 High-dimensional spaces and attack intuitions

51:00 Challenges in open-source model safety

01:00:11 Unlearning and fact editing in models

01:10:55 Adversarial examples and human robustness

01:37:03 Cryptography and AI robustness

01:55:51 Scaling AI security research

Videos

We Can’t Stop AI – Here’s What To Do Instead [Keep The Future Human]

March 6, 2025 (04:00)
By: https://www.youtube.com/@futureoflifeinstitute

We stand at a pivotal moment – humanity is on the brink of developing artificial minds that could exceed our own. To do so would undermine human control of our destiny. We must keep the future human by closing the “gates” to AGI.

‘Keep The Future Human’, a new essay from Anthony Aguirre (Executive Director of the Future of Life Institute) presents a case for why, and how, we should close the ‘Gates’ to AGI and superintelligence, and what we should build instead.

How might AI be weaponized? | AI, Social Media and Nukes at SXSW 2024

May 9, 2024 (58:00)
By: Future of Life Institute

FLI’s Anthony Aguirre speaking on the panel ‘From Algorithms to Arms: Understanding the Interplay of AI, Social Media and Nukes’ at South By Southwest (SXSW) on March 9th, 2024.

See here for event details: https://schedule.sxsw.com/2024/events

Featuring:

  • Anthony Aguirre – Executive Director, Future of Life Institute
  • Frances Haugen – Beyond the Screen; former Facebook Product Manager
  • Jeffrey Ladish – Head of AI Insights, Center For Humane Technology
  • Emily Schwartz – Communications Partner, Bryson Gillette

Discuss

OnAir membership is required. The lead Moderator for the discussions is AGI Policy. We encourage civil, honest, and safe discourse. For more information on commenting and giving feedback, see our Comment Guidelines.

This is an open discussion on the contents of this post.
