Summary
Luiza Jarovsky, co-founder of the AI, Tech & Privacy Academy, is one of the world’s most influential voices in AI governance. Her upskilling programs empower the next generation of AI governance leaders, with over 1,100 professionals trained.
Her weekly newsletter, with 55,000+ subscribers, is a leading AI governance publication, shaping the future of AI policy, compliance, and regulation.
Source: Luiza’s Newsletter
Luiza Jarovsky – 16/03/2025 (57:43)
The global AI race is at full speed, and many essential questions remain unanswered: what will the implications of this race be for AI regulation and governance? If you are interested in AI, you CAN'T MISS my conversation with Anu Bradford; here's why:
Anu Bradford is a professor of law and international organizations at Columbia University and a leading scholar on the global economy and digital regulation. She coined the term 'Brussels Effect', often discussed in the context of AI regulation, and published a book with the same name. More recently, she published the book "Digital Empires: The Global Battle to Regulate Technology," in which she explores the global battle among the three dominant digital powers (the U.S., China, and the EU) and the choices we face as societies and individuals.
➡️ In this live talk, we'll discuss:
- How the U.S., the EU, and China are strategically positioning themselves in the global AI race;
- How the three dominant digital powers are approaching AI regulation and the practical implications of each approach;
- The Brussels effect in the context of AI regulation;
- and more.
OnAir Post: Luiza Jarovsky
News
AI Is Dehumanizing the Internet
The rise of AI has brought massive changes to the internet, a slow process that began at least 20 years ago.
Recent developments show that this transformation is accelerating and will likely lead to the full dehumanization of the internet, leaving us disempowered, easily manipulable, and entirely dependent on companies that provide AI services.
In this edition of the newsletter, I explain the AI-powered dehumanization process, how it impacts us, and some of its ethical and legal implications.
Luiza’s Newsletter – March 9, 2025
While the comparison with DeepSeek might make sense from marketing and geopolitical standpoints, it is important to remember that these are two different AI applications with different strategies and functionalities:
- DeepSeek-R1 is an open-source, general-purpose AI model designed to rival OpenAI’s o1;
- Manus AI is a general AI agent currently in closed beta testing. It requires an invitation to access and is not entirely open-source;
- Manus AI hasn’t triggered a significant stock drop like the one Nvidia saw after DeepSeek’s release.
AI Policy, Compliance & Regulation
In 2024, with the enactment of the EU AI Act, the release of numerous AI governance frameworks, and the launch of new professional certifications, we saw a surge in AI governance professionals entering the workforce, particularly in the U.S. and Europe.
But what exactly is an AI governance professional, and what kinds of jobs could fit this definition?
Many people assume that being an AI governance professional requires a legal degree. This is a misconception. AI governance, from a professional perspective, is an umbrella term encompassing various fields, skills, and areas of expertise.
According to the EU AI Act, the human responsible for oversight measures must be able to understand how the AI system operates and interpret its outputs, intervening when necessary to prevent harm to fundamental rights.
But if AI systems are highly complex and function like black boxes, operating in an opaque manner, how are humans supposed to have a detailed enough comprehension of their functioning and reasoning to oversee them properly?
If we accept that humans often won’t fully grasp an AI system’s decision-making, can they decide whether harm to fundamental rights has occurred? And if not, can human oversight truly be effective?
Emerging AI Governance Challenges | Paid Subscriber Edition | #173
This week, Microsoft announced Majorana 1, a quantum chip powered by a new “topological core architecture.” According to Microsoft, this quantum breakthrough will help solve industrial-scale problems in just a few years rather than decades.
From a more technical perspective, the topoconductor (or topological superconductor) is a special category of material that creates a new state of matter: it’s neither solid, liquid, nor gas, but a “topological state.”
(I highly recommend watching this 12-minute video released by Microsoft to learn more about the science behind it. If you have science-loving kids at home, make sure to watch it with them!)
For those interested in diving deeper into the technical details of Microsoft’s latest announcement, the researchers involved have also published a paper in Nature and a “roadmap to fault-tolerant quantum computation using topological qubit arrays,” which can be found here.
About
Source: Website
In 2023, LinkedIn named Luiza a Top Voice in AI. In 2021, she received a Westin Scholar Award from the IAPP, and in 2020, she was honored with the “President’s Scholarship for Excellence in Science and Innovation” by the President of Israel for her Ph.D. research.
From 2016 to 2020, she was part of a European Commission research project on privacy and data protection. Previously, she worked as a startup lawyer, authored three books, and served as the editor of a law book for entrepreneurs.
Luiza holds a Law degree from the University of São Paulo and a Master’s degree from Tel Aviv University, where she is pursuing her Ph.D. She speaks English, Portuguese, French, Spanish, Italian, German, and Hebrew and is a proud mother of three.
Web Links
Videos
Insights on AI Governance, Compliance, and Regulation, with Barry Scannell
July 25, 2024 (01:07:00)
By: Luiza Jarovsky
If you are interested in AI, you CAN’T MISS my 1-hour conversation with Barry Scannell.
Topics we covered:
➵ What are some of the unspoken challenges behind the EU AI Act?
➵ What AI compliance issues should companies be prioritizing at this point?
➵ Many startups and entrepreneurs are currently lost regarding AI compliance and don’t know where to start. What do you recommend?
➵ Emotion recognition and the AI Act: insights and nuances;
➵ Regulating deepfakes: challenges;
➵ Are you optimistic about the AI Act? What do you see as some of the first challenges AI regulators will face?
➵ Regarding AI technology in general: how do you feel about it? Are you a heavy user? Are you optimistic about potential beneficial uses, or are you more skeptical/worried?
➵ At William Fry, if you were hiring a lawyer specialized in AI, what skills would you be looking for? In your view, what essential skills should aspiring AI governance professionals aim for?