MIT & AI Policy

Summary

MIT’s primary AI policy efforts are centered in the Internet Policy Research Initiative (IPRI), the MIT Schwarzman College of Computing’s AI Policy Forum, and the new Generative AI Impact Consortium.

Source: Website

OnAir Post: MIT & AI Policy

About

IPRI Overview

We are driven by the belief that the digital landscape should be secure, inclusive, and accessible to all. IPRI stands at the intersection of technology and policy, dedicated to addressing the complex challenges that arise as the Internet continues to evolve. We strive to bridge the gap between technical innovation and effective policy-making.

Our mission is to advance the understanding and development of internet policy through a holistic, interdisciplinary approach. We bring together experts from diverse fields, including cybersecurity, data privacy, internet governance, and education. By fostering collaboration across these domains, we are able to tackle the pressing issues facing the digital world today and develop forward-thinking solutions that benefit society as a whole.

Source: Website

Expertise

**1. Cybersecurity**:
We recognize the growing threats in the digital space and are committed to enhancing cybersecurity measures that protect users and infrastructure. Our team conducts cutting-edge research to identify vulnerabilities and develop robust strategies to mitigate risks, ensuring the safety and resilience of online systems.

**2. Data Privacy and Protection**:
In an era where data is a critical asset, we advocate for policies that prioritize user privacy. Our work emphasizes the importance of transparency, consent, and ethical data practices. We collaborate with stakeholders to develop frameworks that safeguard personal information while enabling innovation.

**3. Internet Governance**:
The internet is a global resource that requires fair and transparent governance. We actively engage in discussions around internet governance, promoting policies that reflect the principles of openness, equity, and accountability. Our goal is to ensure that the internet remains a space where diverse voices are heard, and all users can thrive.

Source: Website

Web Links

AI Policy Forum

MIT Schwarzman College of Computing

Source: Website

A global collaboration moving from AI principles to AI practice
The rapidly increasing applicability of artificial intelligence has prompted a number of organizations to work towards understanding the impact of the wide-spread deployment of AI as well as developing high-level principles on social and ethical issues such as privacy, fairness, bias, transparency, and accountability. Now, it is time to take the next step.

Building on these broader principles, the AI Policy Forum (AIPF), a global effort convened by the MIT Schwarzman College of Computing, will work towards formulating concrete guidance for governments and companies to address the emerging challenges. This will involve, on the one hand, selecting the right topics to discuss and the right partners with whom to engage. On the other hand, this will require a sustained focus on understanding the inevitable policy trade-offs that arise in the studied contexts, and identifying the technical tools and the policy levers to bring to bear on them.

The AI Policy Forum operates in yearlong cycles, with each cycle bringing together scientists, technologists, policymakers, and business leaders to examine a specific set of topics in depth. This process culminates in a capstone event gathering high-level decision makers and is intended to provide a focal point for moving from AI principles to AI practice, as well as to serve as a springboard for global efforts to design the future of AI.

Generative AI Impact Consortium

About the Consortium

The consortium will bring researchers and industry together to focus on impact.

Liam McDonnell | Office of Innovation
February 3, 2025
Categories: Collaboration, Research

From crafting complex code to revolutionizing the hiring process, generative artificial intelligence is reshaping industries faster than ever before — pushing the boundaries of creativity, productivity, and collaboration across countless domains.

Enter the MIT Generative AI Impact Consortium, a collaboration between industry leaders and MIT’s top minds. As MIT President Sally Kornbluth highlighted last year, the Institute is poised to address the societal impacts of generative AI through bold collaborations. Building on this momentum and established through MIT’s Generative AI Week and impact papers, the consortium aims to harness AI’s transformative power for societal good, tackling challenges before they shape the future in unintended ways.

“Generative AI and large language models [LLMs] are reshaping everything, with applications stretching across diverse sectors,” says Anantha Chandrakasan, dean of the School of Engineering and MIT’s chief innovation and strategy officer, who leads the consortium. “As we push forward with newer and more efficient models, MIT is committed to guiding their development and impact on the world.”

Chandrakasan adds that the consortium’s vision is rooted in MIT’s core mission. “I am thrilled and honored to help advance one of President Kornbluth’s strategic priorities around artificial intelligence,” he says. “This initiative is uniquely MIT — it thrives on breaking down barriers, bringing together disciplines, and partnering with industry to create real, lasting impact. The collaborations ahead are something we’re truly excited about.”

Developing the blueprint for generative AI’s next leap

The consortium is guided by three pivotal questions, framed by Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and co-chair of the GenAI Dean’s oversight group, that go beyond AI’s technical capabilities and into its potential to transform industries and lives:

  1. How can AI-human collaboration create outcomes that neither could achieve alone?
  2. What is the dynamic between AI systems and human behavior, and how do we maximize the benefits while steering clear of risks?
  3. How can interdisciplinary research guide the development of better, safer AI technologies that improve human life?

Generative AI continues to advance at lightning speed, but its future depends on building a solid foundation. “Everybody recognizes that large language models will transform entire industries, but there’s no strong foundation yet around design principles,” says Tim Kraska, associate professor of electrical engineering and computer science in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-faculty director of the consortium.

“Now is a perfect time to look at the fundamentals — the building blocks that will make generative AI more effective and safer to use,” adds Kraska.

“What excites me is that this consortium isn’t just academic research for the distant future — we’re working on problems where our timelines align with industry needs, driving meaningful progress in real time,” says Vivek F. Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management, and co-faculty director of the consortium.

A “perfect match” of academia and industry

At the heart of the Generative AI Impact Consortium are six founding members: Analog Devices, The Coca-Cola Co., OpenAI, Tata Group, SK Telecom, and TWG Global. Together, they will work hand-in-hand with MIT researchers to accelerate breakthroughs and address industry-shaping problems.

The consortium taps into MIT’s expertise, working across schools and disciplines — led by MIT’s Office of Innovation and Strategy, in collaboration with the MIT Schwarzman College of Computing and all five of MIT’s schools.

“This initiative is the ideal bridge between academia and industry,” says Chandrakasan. “With companies spanning diverse sectors, the consortium brings together real-world challenges, data, and expertise. MIT researchers will dive into these problems to develop cutting-edge models and applications for these different domains.”

AI Policy

Source: Website

IPRI’s AI policy work focuses on trustworthiness, transparency, security, and privacy in artificial intelligence (AI) and machine learning (ML) systems.

IPRI researchers are at the forefront of developing tools that allow for secure machine learning over encrypted data. These tools work best for well-understood tasks in which the training hyperparameters can be fixed at the outset, for example, taking a public model and fine-tuning it on proprietary data.
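The fixed-hyperparameter fine-tuning workflow described above can be sketched in plain Python. This is a toy illustration, not IPRI’s actual tooling: the encryption layer is abstracted away (the “proprietary” data is plaintext here), and the model, data, and hyperparameter values are all illustrative assumptions. The point is the shape of the task: the training schedule is committed to before training begins, as computation over encrypted data cannot be steered interactively.

```python
import math
import random

# Hyperparameters fixed before training begins -- a requirement when
# computing over encrypted data, where the schedule cannot be tuned
# interactively mid-run. (Values here are illustrative assumptions.)
LEARNING_RATE = 0.5
EPOCHS = 200

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fine_tune(public_weights, features, labels):
    """Fine-tune 'public' model weights on proprietary data using a
    preset schedule. In a secure-ML setting, features/labels would
    remain encrypted throughout; here they are plaintext stand-ins."""
    w = list(public_weights)
    for _ in range(EPOCHS):
        grad = [0.0] * len(w)
        for x, y in zip(features, labels):
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x))) - y
            for i, xi in enumerate(x):
                grad[i] += err * xi
        w = [wi - LEARNING_RATE * g / len(labels)
             for wi, g in zip(w, grad)]
    return w

random.seed(0)
true_w = [1.0, -2.0]  # hidden rule generating the stand-in labels
features = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(100)]
labels = [1.0 if sum(t * x for t, x in zip(true_w, f)) > 0 else 0.0
          for f in features]

w_public = [0.1, 0.1]  # weights of the hypothetical "public" model
w_tuned = fine_tune(w_public, features, labels)
accuracy = sum(
    (sigmoid(sum(wi * xi for wi, xi in zip(w_tuned, x))) > 0.5) == bool(y)
    for x, y in zip(features, labels)
) / len(labels)
```

Because every training decision (learning rate, epoch count, batch handling) is made up front, the same loop could in principle run obliviously over ciphertexts, which is what makes this class of task a good fit for secure ML.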

We also focus on autonomous vehicles. If users are to trust the cars they ride in, and to feel comfortable relinquishing control, autonomous subsystems must be able to explain why they took certain actions and show that they can be held accountable for errors. These explanations must be simple enough for users to understand, even under cognitive distraction. To this end, our machine understanding work explores techniques for enabling autonomous systems to explain themselves by generating coherent symbolic representations of the relevant antecedents of significant events in the course of driving.
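The idea of surfacing the "relevant antecedents" of a significant driving event can be illustrated with a toy sketch. This is not IPRI's system: the event names, time window, and selection rule are all hypothetical, chosen only to show the shape of an antecedent-based explanation.

```python
from dataclasses import dataclass

@dataclass
class Event:
    time: float   # seconds since the start of the drive
    kind: str     # hypothetical labels, e.g. "pedestrian_detected"
    detail: str   # human-readable description

def explain(action, log, window=3.0):
    """Return a plain-language explanation of `action` built from the
    events that preceded it within `window` seconds -- a stand-in for
    selecting the relevant antecedents of a significant event."""
    antecedents = [e for e in log
                   if action.time - window <= e.time < action.time
                   and e.kind != action.kind]
    reasons = "; ".join(e.detail for e in antecedents) or "no recorded cause"
    return f"{action.detail} because: {reasons}"

log = [
    Event(10.2, "pedestrian_detected", "a pedestrian entered the crosswalk"),
    Event(10.4, "hard_brake", "the car braked hard"),
]
explanation = explain(log[1], log)
```

A real system would derive antecedents causally rather than by a fixed time window, but the output format, a short symbolic chain a distracted passenger can follow, is the goal the passage describes.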

Discuss

OnAir membership is required. The lead Moderator for the discussions is AGI Policy. We encourage civil, honest, and safe discourse. For more information on commenting and giving feedback, see our Comment Guidelines.

This is an open discussion on the contents of this post.
