Mark Brakel

Summary

Mark Brakel is the Future of Life Institute’s Director of Policy, leading our advocacy and policy efforts globally. Previously, Mark was FLI’s regional director for European policy, and served in the Dutch diplomatic service, where he was posted to the Netherlands’ Embassy in Iraq.

Mark also authors Not Another Big Tech Stack, a monthly perspective on AI policy (unaffiliated with any of the major AI companies).

Mark holds a bachelor’s degree in Philosophy, Politics and Economics from the University of Oxford, and a master’s degree from the Johns Hopkins School of Advanced International Studies (SAIS). He speaks Dutch, English, Arabic and a decent amount of French.

Source: Website

OnAir Post: Mark Brakel

News

Where VP Vance and VDL agree
Not Another Big Tech Stack, Mark Brakel – February 23, 2025

Despite strong disagreement, scope remains for shared understandings on AI issues

At the recent Paris AI Summit, US Vice President J.D. Vance declared that the “Trump administration will ensure that AI systems developed in America are free from ideological bias and never restrict our citizens’ right to free speech”. It would have been hard to imagine European Commission President von der Leyen – also in attendance in France – adopting a similar tone.

Whatever you think of her Commission’s AI Act as a whole, however, it directly tackles the concern of AI-powered manipulation that featured centrally in Vance’s speech. This overlap shows that there remains much scope for international convergence around some of the most important questions in AI governance. New research also makes these manipulation guardrails more urgent.

The holy or not so holy grail … of open source AI
Not Another Big Tech Stack, Mark Brakel – December 8, 2024

One of my favourite podcasts is ‘Leading’ by former Downing Street Director of Communications Alistair Campbell and UK Cabinet Minister Rory Stewart. A couple of months ago, Meta’s policy lead, Nick Clegg, appeared on the show. Wrapping up the episode, Rory notes how he has just received a “lovely email” from Nick suggesting that they sit down to have lunch and discuss “why open sourcing AI is a good idea”. Rory then goes on to say that he “thinks it’s pretty terrifying”. In this month’s newsletter, I dive into the AI open source debate and scrutinise some of the key arguments.

Where do we go from here?
The vast majority of AI systems pose no severe risk of harm and can thus be open sourced without any issue. At the same time, we may want to be more careful about open sourcing the most advanced AI systems before we have some safety guarantees. Last year, the Center for the Governance of AI proposed some pragmatic ways in which many open source benefits can be preserved while still addressing the risks. For example, GovAI encouraged “staged” model release – gradually adding capabilities so that a developer can monitor the impact and cancel a full release if required. They also suggested increased access for model auditors and researchers (rather than anyone and everyone) and more democratic oversight of those who develop the most advanced AI models.

GovAI’s proposals can provide some inspiration to policymakers. Long before any tentative government action, however, AI developers themselves can and should think twice about what they release. During the early stages of nuclear technology – a technology with a transformational potential comparable to AI today – scientists voluntarily censored themselves long before this practice became officially sanctioned. Until one can be certain that an AI model is safe for open source release, a similar approach seems wise.

About

Comments

Hello, my name is Mark Brakel. I currently work as Director of Policy at the Future of Life Institute (FLI), where I lead the team advising governments on how to navigate increasingly advanced artificial intelligence.

Over the course of my work, I have addressed the expert body working on autonomous weapons at the UN in Geneva, the European Parliament, and the Defence Committee of the German parliament. I’ve also spoken about FLI’s work to Politico, Fox News, BBC World, German Der Spiegel and the Dutch NRC Handelsblad, among others.

Before joining FLI, I (unsuccessfully) ran for parliament and worked as a diplomat. I spent time in both The Hague and at the Embassy in Iraq, where I helped establish an entrepreneurship program for young Iraqis.


I am a former Fulbright Fellow and hold a degree in Philosophy, Politics and Economics from the University of Oxford, and a master’s from the School of Advanced International Studies at Johns Hopkins University.

Source: Website

Contact

Email: Subscribe to Newsletter, Personal

Web Links

Videos

Mark Brakel on the UK AI Summit and the Future of AI Policy

November 17, 2023 (01:48:00)
By: Future of Life Institute

Mark Brakel (Director of Policy at the Future of Life Institute) joins the podcast to discuss the AI Safety Summit in Bletchley Park, objections to AI policy, AI regulation in the EU and US, global institutions for safe AI, and autonomy in weapon systems.

Timestamps:

00:00 AI Safety Summit in the UK

12:18 Are officials up to date on AI?

23:22 Objections to AI policy

31:27 The EU AI Act

43:37 The right level of regulation

57:11 Risks and regulatory tools

1:04:44 Open-source AI

1:14:56 Subsidising AI safety research

1:26:29 Global institutions for safe AI

1:34:34 Autonomy in weapon systems

Not Another Big Tech Stack

Overview

Source: Substack

Working in AI policy, I have learnt a massive amount from the blogs, newsletters and writings of various commentators. Most of today’s leading contributors, however, are affiliated with one of the major corporations.

As investments in AI boom, competition intensifies and the financial stakes grow ever higher, talk of policy measures that hurt the bottom line has quickly disappeared. I’ve therefore started this Substack to add a civil society perspective.

Discuss

OnAir membership is required. The lead Moderator for the discussions is AGI Policy. We encourage civil, honest, and safe discourse. For more information on commenting and giving feedback, see our Comment Guidelines.

This is an open discussion on the contents of this post.
