Not Another Big Tech Stack
One of my favourite podcasts is ‘Leading’, hosted by former Downing Street Director of Communications Alistair Campbell and former UK Cabinet Minister Rory Stewart. A couple of months ago, Meta’s policy lead, Nick Clegg, appeared on the show. Wrapping up the episode, Rory notes that he has just received a “lovely email” from Nick suggesting they sit down for lunch to discuss “why open sourcing AI is a good idea”. Rory then adds that he “thinks it’s pretty terrifying”. In this month’s newsletter, I dive into the AI open source debate and scrutinise some of its key arguments.
Where do we go from here?
The vast majority of AI systems pose no severe risk of harm and can thus be open sourced without issue. At the same time, we may want to be more careful about open sourcing the most advanced AI systems before we have meaningful safety guarantees. Last year, the Centre for the Governance of AI (GovAI) proposed pragmatic ways to preserve many of the benefits of open sourcing while still addressing the risks. For example, GovAI encouraged “staged” model release – gradually adding capabilities so that a developer can monitor the impact and halt a full release if required. They also suggested expanded access for model auditors and researchers (rather than anyone and everyone) and more democratic oversight of those developing the most advanced AI models.
GovAI’s proposals can provide some inspiration to policymakers. But developers need not wait for even tentative government action: they can and should think twice about what they release. During the early days of nuclear technology – a technology whose transformational potential is comparable to AI’s today – scientists voluntarily censored their own work long before such restraint was officially sanctioned. Until one can be confident that an AI model is safe for open source release, a similar approach seems wise.