Machine Intelligence Research Institute (MIRI)

Summary

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence.

MIRI’s work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

In 2000, Eliezer Yudkowsky founded the Singularity Institute for Artificial Intelligence with funding from Brian and Sabine Atkins, with the purpose of accelerating the development of artificial intelligence (AI).

Source: Wikipedia

OnAir Post: Machine Intelligence Research Institute (MIRI)

About

Overview

The Machine Intelligence Research Institute (MIRI) is a 501(c)(3) nonprofit based in Berkeley, California. We do research and public outreach intended to help prevent human extinction from the development of artificial superintelligence (ASI).

What we do

Founded more than 20 years ago, MIRI was among the first to recognize the future invention of artificial superintelligence as the most important—and potentially catastrophic—event in the twenty-first century. MIRI was the first organization to advocate for and work on ASI alignment as a technical problem, and has played a central role in building the field over the years.

Unfortunately, our efforts failed to prevent the current emergency. The alignment problem is not on track to be solved before the leading companies succeed in building smarter-than-human AI, and the default outcome is human extinction.

Our priority now is to use the lessons we’ve learned so far to inform the world about the situation and what needs to be done.

Extinction from AI is a live possibility, and the only reasonable response is to stop AI development altogether until the alignment problem has been solved.

Source: MIRI website


The Problem

AI Alignment

Source: MIRI website

The stated goal of the world’s leading AI companies is to build AI that is general enough to do anything a human can do, from solving hard problems in theoretical physics to deftly navigating social environments. Recent machine learning progress seems to have brought this goal within reach. At this point, we would be uncomfortable ruling out the possibility that AI more capable than any human is achieved in the next year or two, and we would be moderately surprised if this outcome were still two decades away.

The current view of MIRI’s research scientists is that if smarter-than-human AI is developed this decade, the result will be an unprecedented catastrophe. The CAIS Statement, which was widely endorsed by senior researchers in the field, states:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

We believe that if researchers build superintelligent AI with anything like the field’s current technical understanding or methods, the expected outcome is human extinction.

“Research labs around the world are currently building tech that is likely to cause human extinction” is a conclusion that should motivate a rapid policy response. The fast pace of AI, however, has caught governments and the voting public flat-footed. This document will aim to bring readers up to speed, and outline the kinds of policy steps that might be able to avert catastrophe.

Research

MIRI’s current focus is on attempting to halt the development of increasingly general AI models, via discussions with policymakers about the extreme risks artificial superintelligence poses. Our technical governance research explores the technical questions that bear on these regulatory and policy goals.

What AI evaluations for preventing catastrophic risks can and cannot do

Source: arXiv

AI evaluations are an important component of the AI governance toolkit, underlying current approaches to safety cases for preventing catastrophic risks. Our paper examines what these evaluations can and cannot tell us. Evaluations can establish lower bounds on AI capabilities and assess certain misuse risks given sufficient effort from evaluators.

Unfortunately, evaluations face fundamental limitations that cannot be overcome within the current paradigm. These include an inability to establish upper bounds on capabilities, reliably forecast future model capabilities, or robustly assess risks from autonomous AI systems. This means that while evaluations are valuable tools, we should not rely on them as our main way of ensuring AI systems are safe. We conclude with recommendations for incremental improvements to frontier AI safety, while acknowledging these fundamental limitations remain unsolved.
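To make the lower-bound point concrete, here is a minimal, hypothetical Python sketch of a capability evaluation harness. It is not code from the paper; the `Task`, `evaluate`, and `run_model` names and the toy task are illustrative assumptions. The point it illustrates: if a model passes tasks under a given elicitation setup, the capability is demonstrably at least that strong, but failure under that setup does not show the capability is absent, since stronger elicitation (better prompts, tools, or fine-tuning) might still succeed. This asymmetry is why such evaluations give lower bounds rather than upper bounds.

```python
# Minimal sketch (not MIRI's code) of a capability evaluation harness.
# `run_model` is a hypothetical stand-in for calling a model, not a real API.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Task:
    prompt: str
    check: Callable[[str], bool]  # returns True if the model's answer passes


def evaluate(run_model: Callable[[str], str], tasks: List[Task], attempts: int = 5) -> float:
    """Score a model on a task suite; the result is only a lower bound on capability."""
    solved = 0
    for task in tasks:
        # Give the model several attempts -- a crude form of "elicitation effort".
        if any(task.check(run_model(task.prompt)) for _ in range(attempts)):
            solved += 1
    return solved / len(tasks)


if __name__ == "__main__":
    # Toy stand-in model and task, purely for illustration.
    toy_tasks = [Task(prompt="What is 2 + 2?", check=lambda ans: "4" in ans)]
    toy_model = lambda prompt: "4"

    score = evaluate(toy_model, toy_tasks)
    # A high score shows the capability is present under this setup (lower bound).
    # A low score does NOT bound the capability from above: a different prompt,
    # tool, or fine-tune might still elicit the behavior.
    print(f"Pass rate under this elicitation setup: {score:.0%}")
```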

More Information

Wikipedia

The Machine Intelligence Research Institute (MIRI), formerly the Singularity Institute for Artificial Intelligence (SIAI), is a non-profit research institute focused since 2005 on identifying and managing potential existential risks from artificial general intelligence. MIRI’s work has focused on a friendly AI approach to system design and on predicting the rate of technology development.

History

(Image: Yudkowsky at Stanford University in 2006)

In 2000, Eliezer Yudkowsky founded the Singularity Institute for Artificial Intelligence with funding from Brian and Sabine Atkins, with the purpose of accelerating the development of artificial intelligence (AI).[1][2][3] However, Yudkowsky began to be concerned that AI systems developed in the future could become superintelligent and pose risks to humanity,[1] and in 2005 the institute moved to Silicon Valley and began to focus on ways to identify and manage those risks, which were at the time largely ignored by scientists in the field.[2]

Starting in 2006, the Institute organized the Singularity Summit to discuss the future of AI, including its risks, initially in cooperation with Stanford University and with funding from Peter Thiel. The San Francisco Chronicle described the first conference as a “Bay Area coming-out party for the tech-inspired philosophy called transhumanism”.[4][5] In 2011, its offices were four apartments in downtown Berkeley.[6] In December 2012, the institute sold its name, web domain, and the Singularity Summit to Singularity University,[7] and in the following month took the name “Machine Intelligence Research Institute”.[8]

In 2014 and 2015, public and scientific interest in the risks of AI grew, increasing donations to fund research at MIRI and similar organizations.[3][9]: 327 

In 2019, Open Philanthropy recommended a general-support grant of approximately $2.1 million over two years to MIRI.[10] In April 2020, Open Philanthropy supplemented this with a $7.7 million grant over two years.[11][12]

In 2021, Vitalik Buterin donated several million dollars worth of Ethereum to MIRI.[13]

Research and approach

(Image: Nate Soares presenting an overview of the AI alignment problem at Google in 2016)

MIRI’s approach to identifying and managing the risks of AI, led by Yudkowsky, primarily addresses how to design friendly AI, covering both the initial design of AI systems and the creation of mechanisms to ensure that evolving AI systems remain friendly.[3][14][15]

MIRI researchers advocate early safety work as a precautionary measure.[16] However, MIRI researchers have expressed skepticism about the views of singularity advocates like Ray Kurzweil that superintelligence is “just around the corner”.[14] MIRI has funded forecasting work through an initiative called AI Impacts, which studies historical instances of discontinuous technological change, and has developed new measures of the relative computational power of humans and computer hardware.[17]

MIRI aligns itself with the principles and objectives of the effective altruism movement.[18]


References

  1. “MIRI: Artificial Intelligence: The Danger of Good Intentions”. Future of Life Institute. 11 October 2015. Archived from the original on 28 August 2018. Retrieved 28 August 2018.
  2. Khatchadourian, Raffi. “The Doomsday Invention”. The New Yorker. Archived from the original on 2019-04-29. Retrieved 2018-08-28.
  3. Waters, Richard (31 October 2014). “Artificial intelligence: machine v man”. Financial Times. Archived from the original on 27 August 2018. Retrieved 27 August 2018.
  4. Abate, Tom (2006). “Smarter than thou?”. San Francisco Chronicle. Archived from the original on 11 February 2011. Retrieved 12 October 2015.
  5. Abate, Tom (2007). “Public meeting will re-examine future of artificial intelligence”. San Francisco Chronicle. Archived from the original on 14 January 2016. Retrieved 12 October 2015.
  6. Kaste, Martin (January 11, 2011). “The Singularity: Humanity’s Last Invention?”. All Things Considered, NPR. Archived from the original on August 28, 2018. Retrieved August 28, 2018.
  7. “Press release: Singularity University Acquires the Singularity Summit”. Singularity University. 9 December 2012. Archived from the original on 27 April 2019. Retrieved 28 August 2018.
  8. “Press release: We are now the “Machine Intelligence Research Institute” (MIRI)”. Machine Intelligence Research Institute. 30 January 2013. Archived from the original on 23 September 2018. Retrieved 28 August 2018.
  9. Tegmark, Max (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. United States: Knopf. ISBN 978-1-101-94659-6.
  10. “Machine Intelligence Research Institute — General Support (2019)”. Open Philanthropy Project. 2019-03-29. Archived from the original on 2019-10-08. Retrieved 2019-10-08.
  11. “Machine Intelligence Research Institute — General Support (2020)”. Open Philanthropy Project. 10 March 2020. Archived from the original on April 13, 2020.
  12. Bensinger, Rob (April 27, 2020). “MIRI’s largest grant to date!”. MIRI. Archived from the original on April 27, 2020. Retrieved April 27, 2020.
  13. Maheshwari, Suyash (2021-05-13). “Ethereum creator Vitalik Buterin donates $1.5 billion in cryptocurrency to India COVID Relief Fund & other charities”. MSN. Archived from the original on 2021-08-24. Retrieved 2023-01-23.
  14. LaFrance, Adrienne (2015). “Building Robots With Better Morals Than Humans”. The Atlantic. Archived from the original on 19 August 2015. Retrieved 12 October 2015.
  15. Russell, Stuart; Norvig, Peter (2009). Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
  16. Sathian, Sanjena (4 January 2016). “The Most Important Philosophers of Our Time Reside in Silicon Valley”. OZY. Archived from the original on 29 July 2018. Retrieved 28 July 2018.
  17. Hsu, Jeremy (2015). “Making Sure AI’s Rapid Rise Is No Surprise”. Discover. Archived from the original on 12 October 2015. Retrieved 12 October 2015.
  18. “AI and Effective Altruism”. Machine Intelligence Research Institute. 2015-08-28. Archived from the original on 2019-10-08. Retrieved 2019-10-08.



    Discuss

    OnAir membership is required. The lead Moderator for the discussions is AGI Policy. We encourage civil, honest, and safe discourse. For more information on commenting and giving feedback, see our Comment Guidelines.

    This is an open discussion on the contents of this post.
