News
The Millennium Project is a global participatory think tank established in 1996 under the American Council for the United Nations University. It became an independent non-profit in 2009 and now has 72 Nodes (groups of institutions and individuals that connect local and global perspectives) around the world.
On the first day of the Trump Administration, the White House’s Office of Personnel Management (OPM) issued a memo that suggested federal agencies consider firing so-called probationary employees. Despite the name, this is not a designation for employees who are in some kind of trouble. Instead, it refers to a “probation” period that applies to newly hired career civil servants, employees who have been transferred between agencies, and sometimes even employees who have been promoted into management roles. These employees are much easier to fire than most federal employees, so they were a natural target for the Trump Administration’s cost-cutting initiatives.
Because probationary employees are disproportionately likely to be young and focused on more recent government priorities (like AI), the move had unintended consequences. The Trump Administration has since updated the OPM memo to add a paragraph clarifying that they are not directing agencies to fire probationary staff (the first link in this article is the original memo, if you would like to compare).
While the memo was a disruption for many federal agencies, it would have been an existential threat to the US AI Safety Institute, virtually all of whose staff are probationary employees. The threat did not come to fruition, but the whole affair gave me, and I suspect others in Washington, an opportunity to ponder the future of the US AI Safety Institute (AISI) under the Trump Administration.
GMU (Mason News) – March 3, 2025
At last week’s Board of Visitors meeting, George Mason University’s Vice President and Chief AI Officer Amarda Shehu rolled out a new model for universities to advance a responsible approach to harnessing artificial intelligence (AI) and drive societal impact. George Mason’s model, called AI2Nexus, is building a nexus of collaboration and resources on campus, throughout the region through its vast partnerships, and across the state.
AI2Nexus is based on four key principles: “Integrating AI” to transform education, research, and operations; “Inspiring with AI” to advance higher education and learning for the future workforce; “Innovating with AI” to lead in responsible AI-enabled discovery and advancements across disciplines; and “Impacting with AI” to drive partnerships and community engagement for societal adoption and change.
Shehu said George Mason can harness its own ecosystem of AI teaching, cutting-edge research, partnerships, and incubators for entrepreneurs to establish a virtuous cycle between foundational and user-inspired AI research within ethical frameworks.
As part of this effort, the university’s AI Task Force, established by President Gregory Washington last year, has developed new guidelines to help the university navigate the rapidly evolving landscape of AI technologies, which are available at gmu.edu/ai-guidelines.
Further, Information Technology Services (ITS) will roll out the NebulaONE academic platform, equipping every student, staff, and faculty member with hundreds of cutting-edge generative AI models while supporting access, performance, and data protection at scale.
“We are anticipating that AI integration will allow us to begin to evaluate and automate some routine processes reducing administrative burdens and freeing up resources for mission-critical activities,” added Charmaine Madison, George Mason’s vice president of information services and CIO.
George Mason is already a leader in developing AI-ready talent, equipping students with AI skills and new ideas for critical sectors like cybersecurity, public health, and government. In the classroom, the university is developing courses and curricula to better prepare students for a rapidly changing world.
In spring 2025, the university launched a cross-disciplinary graduate course, AI: Ethics, Policy, and Society, and in fall 2025, the university is debuting a new undergraduate course open to all students, AI4All: Understanding and Building Artificial Intelligence. A master’s in computer science and machine learning, an Ethics and AI minor for undergraduates of all majors, and a Responsible AI Graduate Certificate are more examples of Mason’s mission to innovate AI education. New academies are also in development, and the goal is to build an infrastructure of more than 100 active core AI and AI-related courses across George Mason’s colleges and programs.
The university will continue to host workshops, conferences, and public forums to shape the discourse on AI ethics and governance, while forging deep and meaningful partnerships with industry, government, and community organizations to offer academies that teach and co-develop technologies to meet global societal needs. The State Council of Higher Education for Virginia (SCHEV) will partner with the university to host an invite-only George Mason-SCHEV AI in Education Summit on May 20-21 on the Fairfax Campus.
Virginia Governor Glenn Youngkin has appointed Jamil N. Jaffer, the founder and executive director of the National Security Institute (NSI) at George Mason’s Antonin Scalia Law School, to the Commonwealth’s new AI Task Force, which will work with legislators to regulate rapidly advancing AI technology.
Despite strong disagreement, scope remains for shared understandings on AI issues
At the recent Paris AI Summit, US Vice President J.D. Vance declared that the “Trump administration will ensure that AI systems developed in America are free from ideological bias and never restrict our citizens’ right to free speech”. It would have been hard to imagine European Commission President von der Leyen – also in attendance in France – adopting a similar tone.
Whatever you think of her Commission’s AI Act as a whole, however, it directly tackles the concern of AI-powered manipulation that featured centrally in Vance’s speech. This overlap shows that there remains much scope for international convergence around some of the most important questions in AI governance. New research also makes these manipulation guardrails more urgent.
Look ahead: Our annual AI+ Summits are coming to not one, not two, but three locations this year. We’ll be in New York on June 4, D.C. on Sept. 16, and will round out the year in San Francisco on Dec. 4.
A project that’s spent six years simulating scenarios of AI’s future validates growing alarm among many observers that runaway competition will drive reckless adoption of unsafe technologies.
- These simulations aren’t running on some massive supercomputer in the cloud — they’re powered by people sitting around a table scattered with cards and dice.
Why it matters: Even some of those who believe powerful AI can be developed safely are worried that viewing the technology’s development as a race will push AI makers toward dangerous choices.
State of play: Since 2019, a group of academics has been developing and refining Intelligence Rising, an interactive game that aims to simulate the development of advanced AI, with individual players taking on the roles of government leaders and company executives.
What I do show students is that the future has a way of arriving ahead of schedule. Which brings me to a report I discussed with students yesterday: if one thing became clear from the Threshold 2030 conference and the resulting report, it is that artificial intelligence is no longer a distant speculation but an economic force barreling toward us with the subtlety of a freight train.
Over two days, thirty highly informed AI lab researchers, economists, policy experts, UN staff, and professional forecasters from around the world gathered to map out three potential futures of AI’s economic impact, none of them reassuringly benign. The discussions, rigorous and, it seems, unflinching, painted a picture not just of transformation but of upheaval.
According to the EU AI Act, the human responsible for oversight measures must be able to understand how the AI system operates and interpret its outputs, intervening when necessary to prevent harm to fundamental rights.
But if AI systems are highly complex and function like black boxes—operating in an opaque manner—how are humans supposed to have a detailed comprehension of their functioning and reasoning to oversee them properly?
If we accept that humans often won’t fully grasp an AI system’s decision-making, can they decide whether harm to fundamental rights has occurred? And if not, can human oversight truly be effective?
During the SB 1047 debate, I noticed that there was a great deal of confusion—my own included—about liability. Why is it precisely that software seems, for the most part, to evade America’s famously capacious notions of liability? Why does America have such an expansive liability system in the first place? What is “reasonable care,” after all? Is AI, being software, free from liability exposure today unless an intrusive legislator decides to change the status quo (preview: the answer to this one is “no”)? How does liability for AI work today, and how should it work? It turned out that to answer those questions I had to trace the history of American liability from the late 19th century to the present day.
Answering the questions above has been a journey. This week and next, I’d like to tell you what I’ve found so far. This week’s essay will tell the story of how we got to where we are, a story that has fascinating parallels to current discussions about the need for liability in AI. Next week’s essay will deal with how the American liability system, unchecked, could subsume AI, and what I believe should be done.
Futures Digest – February 26, 2025
Every year on March 1st, World Futures Day (WFD) brings together people from around the globe to engage in a continuous conversation about the future. What began as an experimental open dialogue in 2014 has grown into a cornerstone event for futurists, thought leaders, and citizens interested in envisioning a better tomorrow. WFD 2025 will mark the twelfth edition of the event.
WFD is a 24-hour, round-the-world global conversation about possible futures and represents a new kind of participatory futures method (Di Berardo, 2022). Futures Day on March 1 was proposed by the World Transhumanist Association, now Humanity+, in 2012 to celebrate the future. Two years later, The Millennium Project launched WFD as a 24-hour worldwide conversation for futurists and the public, providing an open space for discussion. In 2021, UNESCO established a WFD on December 2. However, The Millennium Project and its partners continue to observe March 1 due to its historical significance, its positive reception from the futures community, and the value of multiple celebrations in maintaining focus on future-oriented discussions.
Emerging AI Governance Challenges | Paid Subscriber Edition | #173
This week, Microsoft announced Majorana 1, a quantum chip powered by a new “topological core architecture.” According to Microsoft, this quantum breakthrough will help solve industrial-scale problems in just a few years rather than decades.
From a more technical perspective, the topoconductor (or topological superconductor) is a special category of material that creates a new state of matter: it’s neither solid, liquid, nor gas, but a “topological state.”
(I highly recommend watching this 12-minute video released by Microsoft to learn more about the science behind it. If you have science-loving kids at home, make sure to watch it with them!)
For those interested in diving deeper into the technical details of Microsoft’s latest announcement, the researchers involved have also published a paper in Nature and a “roadmap to fault-tolerant quantum computation using topological qubit arrays,” which can be found here.
The continued need for a light touch
A heavily regulatory approach to AI policy under Trump is not inevitable, yet it is concerningly possible given the anti-tech and pro-industrial-policy sentiments being pushed.
Just because the administration has criticized European AI regulations does not mean it won’t adopt its own problematic regulations of this important technology. Four years is a long time, AI policy is still in its formative stages, and regulatory intervention could have consequences that change the technology’s trajectory or eliminate beneficial uses along with harms.
For those who value freedom, innovation, and global competitiveness, the message is clear: stay vigilant. The regulatory trajectory of AI in the U.S. is far from settled, and the consequences could be profound.
The Apple Strategy
For now, Apple plans to be a reseller.
It claims a “partnership” with OpenAI, but it is nothing Apple can’t get out of. Apple is focused on building models that can run directly on its customers’ devices, as opposed to larger models that require an online connection.
Given that the large models aren’t getting better quickly and continue to hallucinate even after considerable use, this looks like a sound strategy. Given the size of the phone market, any share gained from Android will mean big money, and Apple’s Indian manufacturing base could bring such gains in the Middle East and Southeast Asia, where economies are growing and many countries, such as Vietnam, are at relative peace.
This means that when the GenAI market crashes, as many are predicting it will, Apple shouldn’t crash with it. It is independent of the madness, and I wonder why more analysts aren’t pointing this out.
OpenAI spotted and disrupted two uses of its AI tools as part of broader Chinese influence campaigns, including one designed to spread Spanish-language anti-American disinformation, the company said.
Why it matters: AI’s potential to supercharge disinformation and speed the work of nation-state-backed cyberattacks is steadily moving from scary theory to complex reality.
Driving the news: OpenAI published its latest threat report on Friday, identifying several examples of efforts to misuse ChatGPT and its other tools.