Summary
From Martin LaMonica, former technology journalist and science editor for The Conversation, currently Director of Editorial Projects and Newsletters:
Dear reader,
We at The Conversation are keen to know what questions you have about AI and the types of stories you want to read.
To tell us, please fill out this very short questionnaire. I’ll share your responses (no names or emails will be attached) with the editors to help guide our coverage going forward.
The Conversation AI is different from most newsletters on artificial intelligence. We will, of course, cover how the technology is evolving and its many applications.
But our editors and expert authors do more – they look broadly at the impact this powerful technology is having on society, whether it’s new ethical and regulatory questions, or changes to the workplace. Also, our academic writers approach this subject from a variety of disciplines and from universities around the world, bringing you a global perspective on this hot issue.
OnAir Post: The Conversation AI
News
The Conversation AI – March 11, 2025
The EU has a long-established reputation as a global standard setter, and as a reliable partner for international regulatory cooperation, especially in the digital field. But the second Trump administration is disrupting these dynamics.
In the last decade, several US big tech companies were scrutinised and sanctioned by EU data protection watchdogs for abusing customers’ personal data. Meanwhile, other nations have adopted digital regulations modelled on the EU’s GDPR. They reason that doing so will enhance privacy protections domestically while also strengthening their economic presence in the EU. The list keeps growing, and includes countries traditionally operating on a protectionist agenda, such as China and Brazil.
The same had been true for artificial intelligence. Regulations on the development and use of AI drawn up under the presidency of Joe Biden signalled a degree of alignment with Brussels. The EU’s approach focuses on managing the risks stemming from AI – a goal that appeared to be seriously embraced by the US, too.
But shortly after arriving in office in January, Trump signed several executive orders “removing barriers to American leadership in artificial intelligence”. The Trump administration’s stated aim is to “achieve and maintain unquestioned and unchallenged global technological dominance”. This includes a new stance on AI that concentrates exclusively on economic and competitiveness arguments. Concerns around the risks of that technology, which the EU framework puts at its core, are no longer even part of the conversation in the US.
Trump has also launched an investigation into the EU’s Digital Markets Act (DMA) and Digital Services Act (DSA) as part of a wider exercise to see if “remedial actions” (for which, read tariffs) are needed in response to the taxes and regulations imposed on US tech companies. These EU acts seek to combat concentrations and abuses of digital power and the risks posed by social media platforms. The US is flexing its muscles, while the EU is exposed to a form of regulatory blackmail.
These are but a few examples of the new US government’s remarkably deregulatory approach concerning digital issues, despite the increasing global consensus around the risks and perils in this field.
The fallout
The geopolitics of digital regulation may push the EU towards under-enforcement of its own digital rules so that it can continue to rely on US tech companies and avoid tariffs. The recent US executive orders may have a chilling effect on the enforcement of the DMA and the DSA, or lead to lax application of the EU AI Act, which requires developers of AI systems to meet a series of standards for their products to be lawfully marketed in the EU. Worryingly, some weeks ago the EU withdrew the proposed EU directive on AI liability, which introduced rules on how people could claim compensation for damages caused by AI systems.
Handing unfettered power to privately owned digital companies sits uneasily with both the European tradition of antitrust rules and consumer protection and the values of EU constitutionalism that emerged in the aftermath of the second world war. The conquests of democracy and its values could be significantly eroded in a digital world that is becoming increasingly unequal. What is more, capitulation in the face of regulatory blackmail would amount to a relinquishment of global influence for the EU. The EU’s regulatory tradition and role as international standard-setter would be undermined were it to give in to US pressure.
Regardless of legal traditions and democratic values, any regulator should put people first when drawing up the rules that will govern the digital space – not the interests of a handful of tech companies. Jurisdictions that do not pursue policies ensuring a safe digital world for ordinary people are effectively declaring where their interests reside – not with the many but in the power and wealth of the few.
The Conversation AI – March 6, 2025
Striking a balance
The increasing use of AI in all aspects of people’s lives raises a new set of questions to which history has few answers. At the same time, the urgency of addressing how it should be governed is growing. Policymakers appear paralyzed, debating whether to let innovation flourish unchecked or to impose controls and risk slowing progress. However, I believe that the binary choice between regulation and innovation is a false one.
Instead, it’s possible to chart a different approach that can help guide innovation in a direction that adheres to existing laws and societal norms without stifling creativity, competition and entrepreneurship.
The U.S. has consistently demonstrated its ability to drive economic growth. The American tech innovation system is rooted in entrepreneurial spirit, public and private investment, an open market and legal protections for intellectual property and trade secrets. From the early days of the Industrial Revolution to the rise of the internet and modern digital technologies, the U.S. has maintained its leadership by balancing economic incentives with strategic policy interventions.
In January 2025, President Donald Trump issued an executive order calling for the development of an AI action plan for America. My team and I have developed an AI governance model that can underpin an action plan.
A new governance model
Previous presidential administrations have waded into AI governance, including the Biden administration’s since-rescinded executive order. A growing number of AI regulations have also been passed at the state level. But the U.S. has mostly avoided imposing regulations on AI. This hands-off approach stems in part from a disconnect between Congress and industry, with each doubting the other’s understanding of the technologies requiring governance.
The industry is divided into distinct camps, with smaller companies allowing tech giants to lead governance discussions. Other contributing factors include ideological resistance to regulation, geopolitical concerns and the insufficient coalition-building that has marked past technology policymaking efforts. Yet our study showed that both parties in Congress favor a uniquely American approach to governance.
Congress agrees on extending American leadership, addressing AI’s infrastructure needs and focusing on specific uses of the technology – instead of trying to regulate the technology itself. How to do it? My team’s findings led us to develop the Dynamic Governance Model, a policy-agnostic and nonregulatory method that can be applied to different industries and uses of the technology. It starts with a legislative or executive body setting a policy goal and consists of three subsequent steps:
- Establish a public-private partnership in which public and private sector experts work together to identify standards for evaluating the policy goal. This approach combines industry leaders’ technical expertise and innovation focus with policymakers’ agenda of protecting the public interest through oversight and accountability. By integrating these complementary roles, governance can evolve together with technological developments.
- Create an ecosystem of audit and compliance mechanisms. This market-based approach builds on the standards from the previous step and executes technical audits and compliance reviews. Setting voluntary standards and measuring against them is good, but it can fall short without real oversight. Private sector auditing firms can provide that oversight, so long as the auditors meet fixed ethical and professional standards.
- Set up accountability and liability for AI systems. This step outlines the responsibilities that a company must bear if its products harm people or fail to meet standards. Effective enforcement requires coordinated efforts across institutions. Congress can establish legislative foundations, including liability criteria and sector-specific regulations. It can also create mechanisms for ongoing oversight or rely on existing government agencies for enforcement. Courts will interpret statutes and resolve conflicts, setting precedents. Judicial rulings will clarify ambiguous areas and contribute to a sturdier framework.
Benefits of balance
I believe that this approach offers a balanced path forward, fostering public trust while allowing innovation to thrive. In contrast to conventional regulatory methods that impose blanket restrictions on industry, like the one adopted by the European Union, our model:
- is incremental, integrating learning at each step.
- draws on the existing approaches used in the U.S. for driving public policy, such as competition law, existing regulations and civil litigation.
- can contribute to the development of new laws without imposing excessive burdens on companies.
- draws on past voluntary commitments and industry standards, and encourages trust between the public and private sectors.
The U.S. has long led the world in technological growth and innovation. Pursuing a public-private partnership approach to AI governance should enable policymakers and industry leaders to advance their goals while balancing innovation with transparency and responsibility. We believe that our governance model is aligned with the Trump administration’s goal of removing barriers for industry but also supports the public’s desire for guardrails.